Dec 02 14:28:56 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 02 14:28:56 crc restorecon[4695]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 14:28:56 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 14:28:57 crc restorecon[4695]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc 
restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc 
restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 
14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 14:28:57 crc 
restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 14:28:57 crc 
restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57
crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 
14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 14:28:57 crc 
restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc 
restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc 
restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 02 14:28:57 crc restorecon[4695]:
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 
crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc 
restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc 
restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 14:28:57 crc restorecon[4695]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 14:28:57 crc restorecon[4695]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 02 14:28:57 crc kubenswrapper[4814]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 02 14:28:57 crc kubenswrapper[4814]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 02 14:28:57 crc kubenswrapper[4814]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 02 14:28:57 crc kubenswrapper[4814]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 02 14:28:57 crc kubenswrapper[4814]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 02 14:28:57 crc kubenswrapper[4814]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.981125 4814 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.985891 4814 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.985918 4814 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.985924 4814 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.985929 4814 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.985934 4814 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.985940 4814 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.985945 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.985950 4814 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.985954 4814 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.985958 
4814 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.985962 4814 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.985966 4814 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.985970 4814 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.985973 4814 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.985977 4814 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.985981 4814 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.985985 4814 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.985991 4814 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.985995 4814 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986000 4814 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986045 4814 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986049 4814 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986054 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986058 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986062 4814 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986066 4814 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986070 4814 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986073 4814 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986077 4814 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986081 4814 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986085 4814 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986089 4814 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986096 4814 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986101 4814 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986105 4814 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986109 4814 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986114 4814 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986119 4814 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986125 4814 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986131 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986136 4814 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986141 4814 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986147 4814 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986151 4814 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986155 4814 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986159 4814 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986163 4814 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986167 4814 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986171 4814 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986175 4814 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986179 4814 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986183 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986186 4814 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986190 4814 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986194 4814 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986220 4814 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986224 4814 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986249 4814 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986253 4814 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986257 4814 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986262 4814 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986266 4814 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986270 4814 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986273 4814 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986276 4814 feature_gate.go:330] unrecognized feature gate: Example
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986280 4814 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986284 4814 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986287 4814 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986291 4814 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986294 4814 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.986298 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.986985 4814 flags.go:64] FLAG: --address="0.0.0.0"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.986998 4814 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987008 4814 flags.go:64] FLAG: --anonymous-auth="true"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987014 4814 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987021 4814 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987026 4814 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987032 4814 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987038 4814 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987043 4814 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987047 4814 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987052 4814 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987056 4814 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987060 4814 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987064 4814 flags.go:64] FLAG: --cgroup-root=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987068 4814 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987072 4814 flags.go:64] FLAG: --client-ca-file=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987076 4814 flags.go:64] FLAG: --cloud-config=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987081 4814 flags.go:64] FLAG: --cloud-provider=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987085 4814 flags.go:64] FLAG: --cluster-dns="[]"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987091 4814 flags.go:64] FLAG: --cluster-domain=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987096 4814 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987101 4814 flags.go:64] FLAG: --config-dir=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987105 4814 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987110 4814 flags.go:64] FLAG: --container-log-max-files="5"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987115 4814 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987120 4814 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987124 4814 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987129 4814 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987133 4814 flags.go:64] FLAG: --contention-profiling="false"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987137 4814 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987142 4814 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987147 4814 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987151 4814 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987157 4814 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987161 4814 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987165 4814 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987169 4814 flags.go:64] FLAG: --enable-load-reader="false"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987173 4814 flags.go:64] FLAG: --enable-server="true"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987178 4814 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987183 4814 flags.go:64] FLAG: --event-burst="100"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987188 4814 flags.go:64] FLAG: --event-qps="50"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987193 4814 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987197 4814 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987201 4814 flags.go:64] FLAG: --eviction-hard=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987207 4814 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987211 4814 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987215 4814 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987219 4814 flags.go:64] FLAG: --eviction-soft=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987223 4814 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987246 4814 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987252 4814 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987259 4814 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987265 4814 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987269 4814 flags.go:64] FLAG: --fail-swap-on="true"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987274 4814 flags.go:64] FLAG: --feature-gates=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987279 4814 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987283 4814 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987288 4814 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987292 4814 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987297 4814 flags.go:64] FLAG: --healthz-port="10248"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987301 4814 flags.go:64] FLAG: --help="false"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987305 4814 flags.go:64] FLAG: --hostname-override=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987309 4814 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987314 4814 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987318 4814 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987322 4814 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987326 4814 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987330 4814 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987334 4814 flags.go:64] FLAG: --image-service-endpoint=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987339 4814 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987342 4814 flags.go:64] FLAG: --kube-api-burst="100"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987347 4814 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987352 4814 flags.go:64] FLAG: --kube-api-qps="50"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987356 4814 flags.go:64] FLAG: --kube-reserved=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987360 4814 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987364 4814 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987369 4814 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987374 4814 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987378 4814 flags.go:64] FLAG: --lock-file=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987382 4814 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987387 4814 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987391 4814 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987400 4814 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987405 4814 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987409 4814 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987414 4814 flags.go:64] FLAG: --logging-format="text"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987418 4814 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987422 4814 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987427 4814 flags.go:64] FLAG: --manifest-url=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987431 4814 flags.go:64] FLAG: --manifest-url-header=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987436 4814 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987441 4814 flags.go:64] FLAG: --max-open-files="1000000"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987446 4814 flags.go:64] FLAG: --max-pods="110"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987451 4814 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987455 4814 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987459 4814 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987464 4814 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987468 4814 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987472 4814 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987476 4814 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987486 4814 flags.go:64] FLAG: --node-status-max-images="50"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987490 4814 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987494 4814 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987498 4814 flags.go:64] FLAG: --pod-cidr=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987502 4814 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987509 4814 flags.go:64] FLAG: --pod-manifest-path=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987513 4814 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987518 4814 flags.go:64] FLAG: --pods-per-core="0"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987522 4814 flags.go:64] FLAG: --port="10250"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987526 4814 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987530 4814 flags.go:64] FLAG: --provider-id=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987534 4814 flags.go:64] FLAG: --qos-reserved=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987540 4814 flags.go:64] FLAG: --read-only-port="10255"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987545 4814 flags.go:64] FLAG: --register-node="true"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987550 4814 flags.go:64] FLAG: --register-schedulable="true"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987554 4814 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987562 4814 flags.go:64] FLAG: --registry-burst="10"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987567 4814 flags.go:64] FLAG: --registry-qps="5"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987571 4814 flags.go:64] FLAG: --reserved-cpus=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987576 4814 flags.go:64] FLAG: --reserved-memory=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987581 4814 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987585 4814 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987590 4814 flags.go:64] FLAG: --rotate-certificates="false"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987593 4814 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987598 4814 flags.go:64] FLAG: --runonce="false"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987602 4814 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987606 4814 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987610 4814 flags.go:64] FLAG: --seccomp-default="false"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987615 4814 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987620 4814 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987627 4814 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987633 4814 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987639 4814 flags.go:64] FLAG: --storage-driver-password="root"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987644 4814 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987650 4814 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987655 4814 flags.go:64] FLAG: --storage-driver-user="root"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987660 4814 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987665 4814 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987670 4814 flags.go:64] FLAG: --system-cgroups=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987675 4814 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987683 4814 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987688 4814 flags.go:64] FLAG: --tls-cert-file=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987693 4814 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987701 4814 flags.go:64] FLAG: --tls-min-version=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987705 4814 flags.go:64] FLAG: --tls-private-key-file=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987710 4814 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987715 4814 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987720 4814 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987725 4814 flags.go:64] FLAG: --v="2"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987732 4814 flags.go:64] FLAG: --version="false"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987739 4814 flags.go:64] FLAG: --vmodule=""
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987743 4814 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.987748 4814 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987874 4814 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987879 4814 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987884 4814 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987887 4814 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987891 4814 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987895 4814 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987899 4814 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987903 4814 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987906 4814 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987910 4814 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987913 4814 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987917 4814 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987922 4814 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987926 4814 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987929 4814 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987933 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987937 4814 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987940 4814 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987944 4814 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987947 4814 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987951 4814 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987954 4814 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987958 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987961 4814 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987965 4814 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987969 4814 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987972 4814 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987976 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987982 4814 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987986 4814 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987991 4814 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.987997 4814 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988000 4814 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988004 4814 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988009 4814 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988013 4814 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988017 4814 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988020 4814 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988025 4814 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988029 4814 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988033 4814 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988037 4814 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988042 4814 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988046 4814 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988049 4814 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988053 4814 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988057 4814 feature_gate.go:330] unrecognized feature gate: Example
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988061 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988066 4814 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988071 4814 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988075 4814 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988079 4814 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988083 4814 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988087 4814 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988091 4814 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988095 4814 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988099 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988102 4814 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988106 4814 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988113 4814 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988117 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988120 4814 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988124 4814 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988127 4814 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988131 4814 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988134 4814 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988138 4814 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988142 4814 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988145 4814 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988149 4814 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 02 14:28:57 crc kubenswrapper[4814]: W1202 14:28:57.988152 4814 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 02 14:28:57 crc kubenswrapper[4814]: I1202 14:28:57.988304 4814 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.005225 4814 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.005432 4814 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.005743 4814 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.005823 4814 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.005852 4814 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.005864 4814 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.005921 4814 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.005932 4814 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.005941 4814 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.005954 4814 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.005970 4814 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006013 4814 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006024 4814 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006034 4814 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006045 4814 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006054 4814 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006063 4814 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006104 4814 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006114 4814 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006126 4814 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006139 4814 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006152 4814 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006188 4814 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006198 4814 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006208 4814 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006218 4814 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006274 4814 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006288 4814 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006301 4814 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006313 4814 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006326 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006378 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006396 4814 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006408 4814 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006419 4814 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006473 4814 feature_gate.go:330] unrecognized feature gate: Example
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006488 4814 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006501 4814 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006511 4814 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006519 4814 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006528 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006571 4814 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006580 4814 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006589 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006598 4814 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006607 4814 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006615 4814 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006659 4814 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006668 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006676 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006685 4814 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006694 4814 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006703 4814 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006711 4814 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006751 4814 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006760 4814 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006769 4814 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006777 4814 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006786 4814 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006794 4814 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006805 4814 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006846 4814 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006857 4814 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006868 4814 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006879 4814 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006888 4814 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006898 4814 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006939 4814 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006948 4814 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006957 4814 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006967 4814 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006976 4814 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.006984 4814 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.007032 4814 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007454 4814 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007480 4814 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007492 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007547 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007559 4814 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007570 4814 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007581 4814 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007590 4814 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007600 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007609 4814 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007617 4814 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007626 4814 feature_gate.go:330] unrecognized feature gate: Example
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007634 4814 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007644 4814 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007654 4814 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007663 4814 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007671 4814 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007680 4814 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007692 4814 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007705 4814 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007715 4814 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007726 4814 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007736 4814 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007780 4814 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007790 4814 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007798 4814 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007806 4814 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007814 4814 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007826 4814 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007837 4814 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007848 4814 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007858 4814 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007868 4814 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007877 4814 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007886 4814 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007894 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007903 4814 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007911 4814 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007919 4814 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007928 4814 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007936 4814 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007945 4814 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007955 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007964 4814 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007972 4814 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007981 4814 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.007992 4814 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008003 4814 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008013 4814 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008022 4814 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008033 4814 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008042 4814 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008050 4814 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008059 4814 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008069 4814 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008078 4814 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008090 4814 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008101 4814 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008111 4814 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008120 4814 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008130 4814 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008166 4814 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008175 4814 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008184 4814 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008193 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008202 4814 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008211 4814 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008220 4814 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008250 4814 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008261 4814 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.008273 4814 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.008289 4814 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.008670 4814 server.go:940] "Client rotation is on, will bootstrap in background"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.014208 4814 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.014394 4814 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.015185 4814 server.go:997] "Starting client certificate rotation"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.015264 4814 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.015710 4814 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-10 21:46:18.864813389 +0000 UTC
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.015847 4814 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.022893 4814 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 02 14:28:58 crc kubenswrapper[4814]: E1202 14:28:58.025052 4814 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.026770 4814 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.038559 4814 log.go:25] "Validated CRI v1 runtime API"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.059313 4814 log.go:25] "Validated CRI v1 image API"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.061367 4814 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.064408 4814 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-02-14-24-35-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.064584 4814 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.079928 4814 manager.go:217] Machine: {Timestamp:2025-12-02 14:28:58.07868607 +0000 UTC m=+0.203048294 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:be1679fa-26d7-434d-a9d1-fc57de342feb BootID:81575f95-69c9-44e7-9148-cb51ec732398 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:65:7e:0c Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:65:7e:0c Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:42:20:0d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:6d:63:01 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:16:4c:1c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:b5:75:b6 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:12:21:90:68:6c:35 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:9a:9b:10:7e:25:e2 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.080173 4814 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.080404 4814 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.081062 4814 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.081276 4814 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.081322 4814 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.081575 4814 topology_manager.go:138] "Creating topology manager with none policy"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.081587 4814 container_manager_linux.go:303] "Creating device plugin manager"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.081766 4814 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.081806 4814 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.082017 4814 state_mem.go:36] "Initialized new in-memory state store"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.082098 4814 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.082701 4814 kubelet.go:418] "Attempting to sync node with API server"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.082733 4814 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.082755 4814 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.082769 4814 kubelet.go:324] "Adding apiserver pod source"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.082781 4814 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.084735 4814 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.085031 4814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Dec 02 14:28:58 crc kubenswrapper[4814]: E1202 14:28:58.085120 4814 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.085187 4814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.085358 4814 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Dec 02 14:28:58 crc kubenswrapper[4814]: E1202 14:28:58.085353 4814 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.086142 4814 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.086693 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.086714 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.086723 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.086732 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.086746 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.086754 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.086763 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.086776 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.086787 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.086797 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 
02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.086809 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.086817 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.087131 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.087518 4814 server.go:1280] "Started kubelet" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.088062 4814 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.088330 4814 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 02 14:28:58 crc systemd[1]: Started Kubernetes Kubelet. Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.089400 4814 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.089823 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.089856 4814 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.089984 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 00:31:42.02435815 +0000 UTC Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.090083 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 586h2m43.934279256s for next certificate rotation Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.090675 4814 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.090699 4814 
volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.090750 4814 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 02 14:28:58 crc kubenswrapper[4814]: E1202 14:28:58.090877 4814 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.090890 4814 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Dec 02 14:28:58 crc kubenswrapper[4814]: E1202 14:28:58.091130 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="200ms" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.091669 4814 factory.go:55] Registering systemd factory Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.091759 4814 factory.go:221] Registration of the systemd container factory successfully Dec 02 14:28:58 crc kubenswrapper[4814]: E1202 14:28:58.090865 4814 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.13:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187d6c543ee72dca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 14:28:58.087493066 +0000 UTC m=+0.211855270,LastTimestamp:2025-12-02 14:28:58.087493066 +0000 UTC 
m=+0.211855270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.092634 4814 factory.go:153] Registering CRI-O factory Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.092930 4814 factory.go:221] Registration of the crio container factory successfully Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.092899 4814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Dec 02 14:28:58 crc kubenswrapper[4814]: E1202 14:28:58.092977 4814 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.093013 4814 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.093132 4814 factory.go:103] Registering Raw factory Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.093171 4814 manager.go:1196] Started watching for new ooms in manager Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.093863 4814 server.go:460] "Adding debug handlers to kubelet server" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.094621 4814 manager.go:319] Starting recovery of all containers Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.109952 4814 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110031 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110051 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110070 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110086 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110106 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110123 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110140 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110159 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110177 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110194 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110210 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110249 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110269 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110287 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110323 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110342 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110358 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110374 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" 
seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110390 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110406 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110422 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110439 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110457 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110475 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 
14:28:58.110493 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110514 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110534 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110552 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110569 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110586 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110602 4814 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110619 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110687 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110732 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110753 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110769 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110787 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110804 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110821 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110838 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110858 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110873 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110888 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110904 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110917 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110932 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110948 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110966 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.110982 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" 
seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111000 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111016 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111042 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111060 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111077 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111097 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 02 14:28:58 crc 
kubenswrapper[4814]: I1202 14:28:58.111114 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111131 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111147 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111164 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111181 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111198 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111216 4814 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111259 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111281 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111299 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111315 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111341 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111360 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111379 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111396 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111416 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111434 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111451 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111467 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 
02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111485 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111501 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111518 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111537 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111555 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111577 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111592 4814 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111610 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111626 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111644 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111662 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111678 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111694 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111711 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111726 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111745 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111762 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111778 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111794 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111811 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111827 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111845 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111861 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111878 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111895 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111912 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111930 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111945 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111961 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.111986 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112007 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112027 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112046 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112064 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112085 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112104 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112122 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112144 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112163 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112183 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112201 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112219 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112270 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112289 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112307 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112325 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112343 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112361 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112377 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112393 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112411 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112427 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112444 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112465 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112483 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" 
seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112497 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112513 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112529 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112570 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112591 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112608 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: 
I1202 14:28:58.112624 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112643 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112659 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112678 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112695 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112714 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112731 4814 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112747 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112779 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112799 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112816 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112837 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112857 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112874 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112891 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112908 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112926 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112943 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112962 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112978 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.112995 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115356 4814 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115388 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115402 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115414 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115424 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115433 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115444 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115454 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115463 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115474 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115483 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115494 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115503 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115513 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115522 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115533 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115545 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115556 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115564 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115577 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115587 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115596 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115606 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115617 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115626 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115637 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115647 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115656 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115665 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115677 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115688 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115698 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115708 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115717 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115728 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115738 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115748 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115758 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115770 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115782 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115791 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115801 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115812 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115822 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115831 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115840 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115850 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115859 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115868 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115879 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115890 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115900 4814 reconstruct.go:97] "Volume reconstruction finished"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.115909 4814 reconciler.go:26] "Reconciler: start to sync state"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.118946 4814 manager.go:324] Recovery completed
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.131033 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.134743 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.134784 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.134796 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.135653 4814 cpu_manager.go:225] "Starting CPU manager" policy="none"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.135672 4814 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.135695 4814 state_mem.go:36] "Initialized new in-memory state store"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.141523 4814 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.143317 4814 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.143373 4814 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.143410 4814 kubelet.go:2335] "Starting kubelet main sync loop"
Dec 02 14:28:58 crc kubenswrapper[4814]: E1202 14:28:58.143473 4814 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.147162 4814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused
Dec 02 14:28:58 crc kubenswrapper[4814]: E1202 14:28:58.147314 4814 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.147373 4814 policy_none.go:49] "None policy: Start"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.150545 4814 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.150591 4814 state_mem.go:35] "Initializing new in-memory state store"
Dec 02 14:28:58 crc kubenswrapper[4814]: E1202 14:28:58.192223 4814 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.222405 4814 manager.go:334] "Starting Device Plugin manager"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.222512 4814 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.222526 4814 server.go:79] "Starting device plugin registration server"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.222856 4814 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.222871 4814 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.223223 4814 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.223347 4814 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.223360 4814 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 02 14:28:58 crc kubenswrapper[4814]: E1202 14:28:58.228412 4814 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.243600 4814 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"]
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.243709 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.244592 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.244619 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.244631 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.244788 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.245250 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.245330 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.245577 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.245597 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.245605 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.245673 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.246000 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.246114 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.246444 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.246464 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.246473 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.246726 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.246755 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.246767 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.246897 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.247020 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.247045 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.247641 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.247666 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.247676 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.247829 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.247863 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.247879 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.247932 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.247951 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.247963 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.248104 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.248132 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.248154 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.248899 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.248939 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.248953 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.249251 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.249292 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.249534 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.249578 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.249598 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.250064 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.250088 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.250099 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:28:58 crc kubenswrapper[4814]: E1202 14:28:58.291936 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="400ms"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.318930 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.318992 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.319016 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.319041 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.319115 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.319174 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.319294 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.319357 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.319393 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.319427 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.319461 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.319530 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.319582 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.319702 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.319771 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.323459 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.324763 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.324867 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.324894 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.324940 4814 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: E1202 14:28:58.325842 4814 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.13:6443: connect: connection refused" node="crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.421683 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.421758 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.421796 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.421830 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.421863 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.421895 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.421926 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.421959 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.421968 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.422048 4814 operation_generator.go:637] "MountVolume.SetUp succeeded
for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.422104 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.422064 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.422130 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.422073 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.422157 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.421994 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.422138 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.422357 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.422415 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.422476 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.422560 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.422569 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.422522 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.422613 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.422639 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.422661 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.422690 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.422703 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.422753 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.422826 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.526653 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.530940 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.530997 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:28:58 crc 
kubenswrapper[4814]: I1202 14:28:58.531014 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.531043 4814 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 14:28:58 crc kubenswrapper[4814]: E1202 14:28:58.532467 4814 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.13:6443: connect: connection refused" node="crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.570734 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.577744 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.593953 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-fd82456082f04c03ab8daa04e424c2da0cb353f8eea8b442b4a9cecdbe4428fb WatchSource:0}: Error finding container fd82456082f04c03ab8daa04e424c2da0cb353f8eea8b442b4a9cecdbe4428fb: Status 404 returned error can't find the container with id fd82456082f04c03ab8daa04e424c2da0cb353f8eea8b442b4a9cecdbe4428fb Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.595350 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.597865 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-6ae3dad6d1b87a07e23751a100566dca37b5338372a90edd8b32aceb7e817820 WatchSource:0}: Error finding container 6ae3dad6d1b87a07e23751a100566dca37b5338372a90edd8b32aceb7e817820: Status 404 returned error can't find the container with id 6ae3dad6d1b87a07e23751a100566dca37b5338372a90edd8b32aceb7e817820 Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.607342 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-277e6e651d8fdd12f9700d3d44fef65805ee2ace418484c2aaf86ba1914e95ce WatchSource:0}: Error finding container 277e6e651d8fdd12f9700d3d44fef65805ee2ace418484c2aaf86ba1914e95ce: Status 404 returned error can't find the container with id 277e6e651d8fdd12f9700d3d44fef65805ee2ace418484c2aaf86ba1914e95ce Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.610477 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.616202 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 14:28:58 crc kubenswrapper[4814]: W1202 14:28:58.640523 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-325853f2856d82ee56ce4306fab390345ab500dc5735b9f98b5e195f6eb68962 WatchSource:0}: Error finding container 325853f2856d82ee56ce4306fab390345ab500dc5735b9f98b5e195f6eb68962: Status 404 returned error can't find the container with id 325853f2856d82ee56ce4306fab390345ab500dc5735b9f98b5e195f6eb68962 Dec 02 14:28:58 crc kubenswrapper[4814]: E1202 14:28:58.693431 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="800ms" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.932685 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.934675 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.934720 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.934731 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:28:58 crc kubenswrapper[4814]: I1202 14:28:58.934765 4814 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 14:28:58 crc kubenswrapper[4814]: E1202 14:28:58.935374 4814 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.13:6443: connect: connection refused" node="crc" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.091841 4814 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.152123 4814 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4" exitCode=0 Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.152213 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4"} Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.152345 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ee4b8df53c33bc552bcd37dfe0ccf1ce98751fb06fd156bef8d0fc18a5c9cd17"} Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.152926 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.155193 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.155260 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.155271 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.158503 4814 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4"} Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.158546 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.158572 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"325853f2856d82ee56ce4306fab390345ab500dc5735b9f98b5e195f6eb68962"} Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.159695 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.159737 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.159748 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.161034 4814 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57" exitCode=0 Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.161107 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57"} Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.161136 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"277e6e651d8fdd12f9700d3d44fef65805ee2ace418484c2aaf86ba1914e95ce"} Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.161540 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.162542 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.162563 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.162577 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.163408 4814 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="05d3282ab63cac018ed36a85a203552ed6a576a5f093e338b48392cd08131883" exitCode=0 Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.163492 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"05d3282ab63cac018ed36a85a203552ed6a576a5f093e338b48392cd08131883"} Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.163559 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6ae3dad6d1b87a07e23751a100566dca37b5338372a90edd8b32aceb7e817820"} Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.163677 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.168956 4814 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.169019 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.169036 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.170931 4814 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728" exitCode=0 Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.170996 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728"} Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.171188 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fd82456082f04c03ab8daa04e424c2da0cb353f8eea8b442b4a9cecdbe4428fb"} Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.171365 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.172495 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.172521 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.172537 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 02 14:28:59 crc kubenswrapper[4814]: W1202 14:28:59.263684 4814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Dec 02 14:28:59 crc kubenswrapper[4814]: E1202 14:28:59.263776 4814 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Dec 02 14:28:59 crc kubenswrapper[4814]: W1202 14:28:59.323717 4814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Dec 02 14:28:59 crc kubenswrapper[4814]: E1202 14:28:59.323829 4814 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Dec 02 14:28:59 crc kubenswrapper[4814]: W1202 14:28:59.452016 4814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Dec 02 14:28:59 crc kubenswrapper[4814]: E1202 14:28:59.452103 4814 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: 
failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Dec 02 14:28:59 crc kubenswrapper[4814]: E1202 14:28:59.494987 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="1.6s" Dec 02 14:28:59 crc kubenswrapper[4814]: W1202 14:28:59.508496 4814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Dec 02 14:28:59 crc kubenswrapper[4814]: E1202 14:28:59.508641 4814 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.735772 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.739495 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.739547 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:28:59 crc kubenswrapper[4814]: I1202 14:28:59.739557 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:28:59 crc kubenswrapper[4814]: 
I1202 14:28:59.739585 4814 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.181188 4814 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.186483 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6009cf24f811b4a07cf2ed259ceb0ff6d9a6a286ae7df4ea87d1873864c47100"} Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.187034 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a770612ae67e1ad2a03714776995463980ed3dd8e46434adac446ec17495b21e"} Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.187052 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3085bff468e4e77202bec19f8991e1747d96a401ad61a47c7f5400dc210e9dd5"} Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.186824 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.188164 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.188206 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.188214 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.192430 4814 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a"} Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.192474 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948"} Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.192487 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95"} Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.192502 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6"} Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.197677 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467"} Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.197705 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0"} Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.197716 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3"} Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.197812 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.198506 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.198536 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.198546 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.200248 4814 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a" exitCode=0 Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.200295 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a"} Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.200378 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.200896 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.200915 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 
14:29:00.200925 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.203428 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"921bb8cb7d4562e290648eb6e0f3eed3c7c8c12ad094d68cdf26d81b2ce9e693"} Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.203511 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.204206 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.204247 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.204260 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:00 crc kubenswrapper[4814]: I1202 14:29:00.227012 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 14:29:01 crc kubenswrapper[4814]: I1202 14:29:01.210685 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67"} Dec 02 14:29:01 crc kubenswrapper[4814]: I1202 14:29:01.210793 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:01 crc kubenswrapper[4814]: I1202 14:29:01.211970 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:01 
crc kubenswrapper[4814]: I1202 14:29:01.212013 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:01 crc kubenswrapper[4814]: I1202 14:29:01.212032 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:01 crc kubenswrapper[4814]: I1202 14:29:01.213552 4814 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4" exitCode=0 Dec 02 14:29:01 crc kubenswrapper[4814]: I1202 14:29:01.213653 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4"} Dec 02 14:29:01 crc kubenswrapper[4814]: I1202 14:29:01.213770 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:01 crc kubenswrapper[4814]: I1202 14:29:01.213807 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:01 crc kubenswrapper[4814]: I1202 14:29:01.213780 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:01 crc kubenswrapper[4814]: I1202 14:29:01.214888 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:01 crc kubenswrapper[4814]: I1202 14:29:01.214934 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:01 crc kubenswrapper[4814]: I1202 14:29:01.214952 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:01 crc kubenswrapper[4814]: I1202 14:29:01.215467 4814 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:01 crc kubenswrapper[4814]: I1202 14:29:01.215502 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:01 crc kubenswrapper[4814]: I1202 14:29:01.215469 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:01 crc kubenswrapper[4814]: I1202 14:29:01.215511 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:01 crc kubenswrapper[4814]: I1202 14:29:01.215535 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:01 crc kubenswrapper[4814]: I1202 14:29:01.215601 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:02 crc kubenswrapper[4814]: I1202 14:29:02.220578 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285"} Dec 02 14:29:02 crc kubenswrapper[4814]: I1202 14:29:02.220631 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 14:29:02 crc kubenswrapper[4814]: I1202 14:29:02.220651 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a"} Dec 02 14:29:02 crc kubenswrapper[4814]: I1202 14:29:02.220668 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:02 crc kubenswrapper[4814]: I1202 14:29:02.220675 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190"} Dec 02 14:29:02 crc kubenswrapper[4814]: I1202 14:29:02.220693 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f"} Dec 02 14:29:02 crc kubenswrapper[4814]: I1202 14:29:02.220660 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:02 crc kubenswrapper[4814]: I1202 14:29:02.221983 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:02 crc kubenswrapper[4814]: I1202 14:29:02.222016 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:02 crc kubenswrapper[4814]: I1202 14:29:02.222025 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:02 crc kubenswrapper[4814]: I1202 14:29:02.221986 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:02 crc kubenswrapper[4814]: I1202 14:29:02.222125 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:02 crc kubenswrapper[4814]: I1202 14:29:02.222142 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.109295 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.109547 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.111032 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.111096 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.111108 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.228906 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097"} Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.229120 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.230105 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.230148 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.230159 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.277691 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.277864 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.278964 4814 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.278988 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.278997 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.352929 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.358860 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.586615 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.586789 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.586840 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.588475 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.588523 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:03 crc kubenswrapper[4814]: I1202 14:29:03.588536 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:04 crc kubenswrapper[4814]: I1202 14:29:04.231322 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:04 crc 
kubenswrapper[4814]: I1202 14:29:04.231322 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:04 crc kubenswrapper[4814]: I1202 14:29:04.232312 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:04 crc kubenswrapper[4814]: I1202 14:29:04.232344 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:04 crc kubenswrapper[4814]: I1202 14:29:04.232358 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:04 crc kubenswrapper[4814]: I1202 14:29:04.232312 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:04 crc kubenswrapper[4814]: I1202 14:29:04.232418 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:04 crc kubenswrapper[4814]: I1202 14:29:04.232428 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:05 crc kubenswrapper[4814]: I1202 14:29:05.232416 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:29:05 crc kubenswrapper[4814]: I1202 14:29:05.232646 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 14:29:05 crc kubenswrapper[4814]: I1202 14:29:05.232705 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:05 crc kubenswrapper[4814]: I1202 14:29:05.234174 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:05 crc kubenswrapper[4814]: I1202 14:29:05.234535 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 14:29:05 crc kubenswrapper[4814]: I1202 14:29:05.234559 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:05 crc kubenswrapper[4814]: I1202 14:29:05.236052 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 14:29:05 crc kubenswrapper[4814]: I1202 14:29:05.236120 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:05 crc kubenswrapper[4814]: I1202 14:29:05.237171 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:05 crc kubenswrapper[4814]: I1202 14:29:05.237207 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:05 crc kubenswrapper[4814]: I1202 14:29:05.237219 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:05 crc kubenswrapper[4814]: I1202 14:29:05.241719 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 02 14:29:05 crc kubenswrapper[4814]: I1202 14:29:05.244545 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:05 crc kubenswrapper[4814]: I1202 14:29:05.246849 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:05 crc kubenswrapper[4814]: I1202 14:29:05.246906 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:05 crc kubenswrapper[4814]: I1202 14:29:05.246923 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:05 crc kubenswrapper[4814]: I1202 14:29:05.298994 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-etcd/etcd-crc" Dec 02 14:29:06 crc kubenswrapper[4814]: I1202 14:29:06.109861 4814 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 02 14:29:06 crc kubenswrapper[4814]: I1202 14:29:06.109957 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:29:06 crc kubenswrapper[4814]: I1202 14:29:06.238149 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:06 crc kubenswrapper[4814]: I1202 14:29:06.239553 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:06 crc kubenswrapper[4814]: I1202 14:29:06.239614 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:06 crc kubenswrapper[4814]: I1202 14:29:06.239624 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:06 crc kubenswrapper[4814]: I1202 14:29:06.490036 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 14:29:06 crc kubenswrapper[4814]: I1202 14:29:06.490376 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:06 crc kubenswrapper[4814]: I1202 14:29:06.491762 4814 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:06 crc kubenswrapper[4814]: I1202 14:29:06.491823 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:06 crc kubenswrapper[4814]: I1202 14:29:06.491834 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:08 crc kubenswrapper[4814]: I1202 14:29:08.075606 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:29:08 crc kubenswrapper[4814]: I1202 14:29:08.076022 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:08 crc kubenswrapper[4814]: I1202 14:29:08.077994 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:08 crc kubenswrapper[4814]: I1202 14:29:08.078057 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:08 crc kubenswrapper[4814]: I1202 14:29:08.078068 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:08 crc kubenswrapper[4814]: E1202 14:29:08.229256 4814 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 02 14:29:09 crc kubenswrapper[4814]: E1202 14:29:09.740575 4814 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 02 14:29:10 crc kubenswrapper[4814]: I1202 14:29:10.094383 4814 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout 
Dec 02 14:29:10 crc kubenswrapper[4814]: E1202 14:29:10.183348 4814 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 02 14:29:10 crc kubenswrapper[4814]: I1202 14:29:10.261050 4814 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 02 14:29:10 crc kubenswrapper[4814]: I1202 14:29:10.261188 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 02 14:29:10 crc kubenswrapper[4814]: I1202 14:29:10.277684 4814 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 02 14:29:10 crc kubenswrapper[4814]: I1202 14:29:10.277781 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 02 14:29:11 
crc kubenswrapper[4814]: I1202 14:29:11.341513 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:11 crc kubenswrapper[4814]: I1202 14:29:11.343130 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:11 crc kubenswrapper[4814]: I1202 14:29:11.343173 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:11 crc kubenswrapper[4814]: I1202 14:29:11.343187 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:11 crc kubenswrapper[4814]: I1202 14:29:11.343217 4814 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 14:29:14 crc kubenswrapper[4814]: I1202 14:29:14.215533 4814 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 02 14:29:14 crc kubenswrapper[4814]: I1202 14:29:14.229725 4814 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.241355 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.241615 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.242750 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.242792 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.242805 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.243280 4814 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.243371 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.246803 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.262730 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.263304 4814 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.263394 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.264724 4814 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.264915 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.265056 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.278628 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.278816 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.279937 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.279988 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.280004 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:15 crc kubenswrapper[4814]: E1202 14:29:15.279934 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.281865 4814 trace.go:236] Trace[1167417668]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 14:29:01.978) (total time: 13302ms): Dec 02 14:29:15 crc kubenswrapper[4814]: Trace[1167417668]: ---"Objects listed" error: 13302ms (14:29:15.281) Dec 02 14:29:15 crc kubenswrapper[4814]: Trace[1167417668]: [13.302997576s] [13.302997576s] END Dec 02 
14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.281888 4814 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.283712 4814 trace.go:236] Trace[1479359700]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 14:29:01.271) (total time: 14011ms): Dec 02 14:29:15 crc kubenswrapper[4814]: Trace[1479359700]: ---"Objects listed" error: 14011ms (14:29:15.283) Dec 02 14:29:15 crc kubenswrapper[4814]: Trace[1479359700]: [14.011737072s] [14.011737072s] END Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.283756 4814 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.283838 4814 trace.go:236] Trace[2065648665]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 14:29:01.703) (total time: 13579ms): Dec 02 14:29:15 crc kubenswrapper[4814]: Trace[2065648665]: ---"Objects listed" error: 13579ms (14:29:15.283) Dec 02 14:29:15 crc kubenswrapper[4814]: Trace[2065648665]: [13.579853206s] [13.579853206s] END Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.283869 4814 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.285093 4814 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.285621 4814 trace.go:236] Trace[1300412089]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 14:29:01.162) (total time: 14122ms): Dec 02 14:29:15 crc kubenswrapper[4814]: Trace[1300412089]: ---"Objects listed" error: 14122ms (14:29:15.285) Dec 02 14:29:15 crc kubenswrapper[4814]: Trace[1300412089]: [14.122570144s] [14.122570144s] END Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.285657 4814 
reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 02 14:29:15 crc kubenswrapper[4814]: I1202 14:29:15.295805 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.092264 4814 apiserver.go:52] "Watching apiserver" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.096432 4814 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.096867 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.097458 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.097709 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.098026 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.097930 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.098347 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.098477 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.097956 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.098741 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.098485 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.100027 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.100059 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.100542 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.100805 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.100957 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.100983 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.103960 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.103962 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.104628 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.109806 4814 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.109864 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.126388 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.142838 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.160590 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.171413 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.180436 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.189952 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.191378 4814 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.202017 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.266959 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.269497 4814 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67" exitCode=255 Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.269602 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67"} Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.279183 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.279617 4814 scope.go:117] "RemoveContainer" containerID="077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67" 
Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.285063 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.285651 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.296327 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.296371 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.296392 4814 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.296411 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.296513 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.296698 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.296934 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.297059 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.297134 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.297258 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.297404 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.297420 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.297508 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.297573 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.297654 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.297711 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.297733 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.297755 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.297835 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298028 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.298059 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:29:16.79802938 +0000 UTC m=+18.922391774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298096 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298212 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298274 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 02 14:29:16 crc 
kubenswrapper[4814]: I1202 14:29:16.298303 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298334 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298337 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298309 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298370 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298412 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298432 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298447 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298464 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298483 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298498 4814 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298517 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298534 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298553 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298573 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298591 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298611 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298672 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298700 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298726 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298767 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298773 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298948 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.298846 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299012 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299044 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 02 14:29:16 crc 
kubenswrapper[4814]: I1202 14:29:16.299067 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299091 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299116 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299165 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299190 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299215 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299255 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299282 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299306 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299329 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299387 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299416 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299440 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299468 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299493 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299520 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299546 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299572 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299597 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299620 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299645 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299669 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299694 4814 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299722 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299748 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299806 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299836 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299893 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299923 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299949 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299972 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300001 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300026 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 02 14:29:16 crc kubenswrapper[4814]: 
I1202 14:29:16.300053 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300078 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300112 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300136 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300160 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300183 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" 
(UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300209 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300257 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300284 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300307 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300330 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300378 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300413 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300438 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300465 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300487 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300510 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 
02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300537 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300561 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300583 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300607 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300633 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300656 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300678 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300704 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300727 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300751 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300774 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 02 14:29:16 crc 
kubenswrapper[4814]: I1202 14:29:16.300798 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300823 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300851 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300876 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300899 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300925 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300955 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300982 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301007 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301030 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301053 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 02 14:29:16 crc 
kubenswrapper[4814]: I1202 14:29:16.301079 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301103 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301128 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301152 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301176 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301199 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301249 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301280 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301305 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301332 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301354 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301375 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301397 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301421 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301445 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301466 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301491 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301514 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301537 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301558 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301577 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301602 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301627 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301652 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301677 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301701 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301735 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301758 4814 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301782 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301805 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301828 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301851 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301872 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301894 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301914 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301934 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301960 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301927 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301983 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302007 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302029 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302050 4814 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302079 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302115 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302139 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302160 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302181 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302204 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302245 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302271 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302309 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302333 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302357 4814 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302382 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302406 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302431 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302460 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302484 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") 
pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302510 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302533 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302553 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302574 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302596 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302619 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302643 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302666 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302691 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302714 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302738 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302758 4814 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302780 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302803 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302826 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302853 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302876 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302898 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302925 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302948 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302973 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302996 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.303020 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.303045 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.303069 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.303095 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.303116 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.303155 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 02 
14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.303179 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.303202 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.303266 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.303297 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.303326 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 14:29:16 crc 
kubenswrapper[4814]: I1202 14:29:16.304022 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304054 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304086 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304112 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304143 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304172 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304203 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304253 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304282 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304309 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304336 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304378 4814 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304397 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304414 4814 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304427 4814 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304440 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304453 4814 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304465 4814 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304479 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304493 4814 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304506 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304518 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304531 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304542 4814 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299328 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.310575 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299609 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299779 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.299783 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300063 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300365 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300375 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300585 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300802 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300897 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.300992 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301102 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301271 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.301565 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302551 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302481 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302618 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302843 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.302950 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.303000 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.303279 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.303624 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304159 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304514 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304554 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.304691 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.305040 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.305402 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.305415 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.305452 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.305466 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.305475 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.305624 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.305963 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.305985 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.306033 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.306066 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.306078 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.306676 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.306962 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.307333 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.307336 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.308611 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.309118 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.309322 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.309917 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.309953 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.309975 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.310032 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.310816 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.311131 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.311459 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.311455 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.311484 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.311010 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.311014 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.311874 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.311830 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.312431 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.312617 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.312899 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.313283 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.313330 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.313950 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.314582 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.314739 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.314891 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.315380 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.316555 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.316674 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.310532 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.316936 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.316998 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.317147 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.317717 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.318483 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.318564 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.318587 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.318630 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.318921 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.319441 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.319590 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.319664 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.319839 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.319988 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.320354 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.320433 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.320915 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.321079 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.321147 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.321254 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.321312 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.321379 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.321408 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.320971 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.322020 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.322051 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.322124 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.321837 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.322166 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.322843 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.322920 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.323010 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.323943 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.323962 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.324051 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.324469 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.324792 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.324869 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.325404 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.325659 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.328777 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.328788 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.328818 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.328916 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.329112 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.329397 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.329569 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.329669 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.329871 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.330291 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.331023 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.331338 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.331533 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.331617 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.331552 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.331812 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.332100 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.316958 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.332297 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.331546 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.333716 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.333784 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.333919 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.334157 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.334310 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.334343 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.334358 4814 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.334428 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.334448 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:16.834417002 +0000 UTC m=+18.958779206 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.334696 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.334716 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.334910 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.334953 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.332455 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.335009 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.335048 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.336170 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.336323 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.336668 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.336713 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.336790 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.336723 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.336818 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.336743 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.336961 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.337063 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.337353 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.337474 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.337650 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.338032 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.338537 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.338880 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.339118 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.339274 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.339287 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.339393 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.339469 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.340178 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.340351 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.340597 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.342704 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.345698 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.346526 4814 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.346558 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.347154 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.347637 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.347755 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.348302 4814 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.348441 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:16.848393192 +0000 UTC m=+18.972755576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.349294 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.349566 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.350242 4814 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.350333 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:16.85030693 +0000 UTC m=+18.974669134 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.351977 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.352035 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.352599 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.352945 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.357213 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.358109 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.359462 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.361881 4814 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.361962 4814 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.363376 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.365404 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.366763 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.366918 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.366954 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.366963 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.366979 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.366991 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:16Z","lastTransitionTime":"2025-12-02T14:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.367646 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.369544 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.370366 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.370472 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.370567 4814 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.370698 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:16.870676421 +0000 UTC m=+18.995038625 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.370473 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.372298 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.378851 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.382354 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.391959 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.393814 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.396508 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.396554 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.396567 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.396584 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.396595 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:16Z","lastTransitionTime":"2025-12-02T14:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.406594 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.406692 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.406769 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.406979 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407040 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407057 4814 reconciler_common.go:293] "Volume 
detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407069 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407079 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407092 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407102 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407111 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407120 4814 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407128 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407137 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407150 4814 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407159 4814 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407168 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407178 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407187 4814 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407196 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on 
node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407204 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407214 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407223 4814 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407250 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407261 4814 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407270 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407280 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 
14:29:16.407317 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407329 4814 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407338 4814 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407346 4814 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407355 4814 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407364 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407374 4814 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407382 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407391 4814 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407401 4814 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407409 4814 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407418 4814 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407428 4814 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407440 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407449 4814 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407459 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407467 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407476 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407484 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407492 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407501 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407509 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 
02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407519 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407528 4814 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407550 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407559 4814 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407568 4814 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407577 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407585 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407594 4814 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407602 4814 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407611 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407619 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407627 4814 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407635 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407644 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407652 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on 
node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407661 4814 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407670 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407678 4814 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407687 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407695 4814 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407703 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407712 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407720 4814 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407729 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407737 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407746 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407754 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407763 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407771 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407780 4814 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407788 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407797 4814 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407807 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407816 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407824 4814 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407833 4814 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407843 4814 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") 
on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407852 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407860 4814 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407868 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407877 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407886 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407901 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407911 4814 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: 
I1202 14:29:16.407919 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407928 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407936 4814 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407947 4814 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407955 4814 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407965 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.407975 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408003 4814 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408012 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408021 4814 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408033 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408042 4814 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408052 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408064 4814 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408075 4814 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408086 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408098 4814 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408109 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408119 4814 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408131 4814 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408140 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408149 4814 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408160 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408171 4814 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408182 4814 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408193 4814 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408203 4814 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408297 4814 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408305 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408313 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408325 4814 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408333 4814 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408341 4814 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408371 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408379 4814 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408388 4814 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408396 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408405 4814 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408413 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408424 4814 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408434 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408442 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408451 4814 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 
14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408460 4814 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408469 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408477 4814 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408486 4814 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408495 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408504 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408513 4814 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408521 4814 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408544 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408554 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408563 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408571 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408581 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408589 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408598 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408608 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408616 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408624 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408633 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408643 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408651 4814 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408666 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 02 
14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408811 4814 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408823 4814 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408831 4814 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.408999 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.409007 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411138 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411168 4814 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411181 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411192 4814 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411204 4814 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411213 4814 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411249 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411262 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411272 4814 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411282 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath 
\"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411294 4814 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411342 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411352 4814 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411363 4814 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411374 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411383 4814 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411393 4814 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411403 4814 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411413 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411426 4814 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411436 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.411449 4814 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.409497 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.413448 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.415501 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.415536 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.415547 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.415568 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.415579 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:16Z","lastTransitionTime":"2025-12-02T14:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.421551 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.426697 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.428400 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 14:29:16 crc kubenswrapper[4814]: W1202 14:29:16.430948 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-d280d0c840b4030e7c1c981ef47789d01486961085d14abc554130f2dc1d0b1e WatchSource:0}: Error finding container d280d0c840b4030e7c1c981ef47789d01486961085d14abc554130f2dc1d0b1e: Status 404 returned error can't find the container with id d280d0c840b4030e7c1c981ef47789d01486961085d14abc554130f2dc1d0b1e Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.431185 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.431210 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.431218 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.431246 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.431257 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:16Z","lastTransitionTime":"2025-12-02T14:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:16 crc kubenswrapper[4814]: W1202 14:29:16.431600 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-4bbc60580da426d2669667748c5ce053323ba4ff4353ebe38dd0b0954b46e297 WatchSource:0}: Error finding container 4bbc60580da426d2669667748c5ce053323ba4ff4353ebe38dd0b0954b46e297: Status 404 returned error can't find the container with id 4bbc60580da426d2669667748c5ce053323ba4ff4353ebe38dd0b0954b46e297 Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.441702 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.448135 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.448182 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.448192 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.448212 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.448223 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:16Z","lastTransitionTime":"2025-12-02T14:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.458324 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.458489 4814 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.460471 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.460643 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.460776 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.460920 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.461017 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:16Z","lastTransitionTime":"2025-12-02T14:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.472065 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.494905 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.504704 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.510991 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.525993 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.536064 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.551101 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.564115 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.564167 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.564181 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 
02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.564203 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.564217 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:16Z","lastTransitionTime":"2025-12-02T14:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.570363 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.621643 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.644248 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.654024 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.666842 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.666865 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.666873 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.666887 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.666897 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:16Z","lastTransitionTime":"2025-12-02T14:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.769445 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.769494 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.769505 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.769527 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.769540 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:16Z","lastTransitionTime":"2025-12-02T14:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.815037 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.815304 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:29:17.815268402 +0000 UTC m=+19.939630606 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.872625 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.873017 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.873112 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.873204 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:16 crc kubenswrapper[4814]: 
I1202 14:29:16.873314 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:16Z","lastTransitionTime":"2025-12-02T14:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.916487 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.916568 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.916601 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.916626 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.916788 4814 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.916856 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:17.916837717 +0000 UTC m=+20.041199921 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.917261 4814 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.917314 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.917360 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 14:29:16 
crc kubenswrapper[4814]: E1202 14:29:16.917377 4814 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.917394 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:17.91737172 +0000 UTC m=+20.041733914 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.917447 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:17.917423381 +0000 UTC m=+20.041785735 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.917615 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.917710 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.917785 4814 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:16 crc kubenswrapper[4814]: E1202 14:29:16.917898 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:17.917885713 +0000 UTC m=+20.042248097 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.975942 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.976317 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.976463 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.976584 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:16 crc kubenswrapper[4814]: I1202 14:29:16.976666 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:16Z","lastTransitionTime":"2025-12-02T14:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.087502 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.087553 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.087565 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.087582 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.087594 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:17Z","lastTransitionTime":"2025-12-02T14:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.190812 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.190860 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.190870 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.190888 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.190898 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:17Z","lastTransitionTime":"2025-12-02T14:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.274274 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.276899 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf"} Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.277128 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.279688 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce"} Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.279767 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2"} Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.279794 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8e3fd63beb02e31831cf25502e444f09f458965ba84fd182ba2d02fe03e6bf0e"} Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.280622 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4bbc60580da426d2669667748c5ce053323ba4ff4353ebe38dd0b0954b46e297"} Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.282255 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c"} Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.282286 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d280d0c840b4030e7c1c981ef47789d01486961085d14abc554130f2dc1d0b1e"} Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.293308 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.293352 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.293361 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.293377 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.293388 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:17Z","lastTransitionTime":"2025-12-02T14:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.313834 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:17Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.332288 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:17Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.352948 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:17Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.373851 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f429
28e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092
272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:17Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.393317 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c
8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:17Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.395470 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.395533 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.395544 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.395564 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.395576 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:17Z","lastTransitionTime":"2025-12-02T14:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.409557 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:17Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.423313 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:17Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.437537 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:17Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.450700 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:17Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.465260 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:17Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.479556 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:17Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.497810 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.497872 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.497884 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.497902 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.497913 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:17Z","lastTransitionTime":"2025-12-02T14:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.498701 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:17Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.515824 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:17Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.531962 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:17Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.544204 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:17Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.558459 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:17Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.576685 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:17Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.589273 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c
8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:17Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.600626 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.600685 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.600698 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.600723 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.600738 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:17Z","lastTransitionTime":"2025-12-02T14:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.703426 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.703485 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.703501 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.703527 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.703543 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:17Z","lastTransitionTime":"2025-12-02T14:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.805998 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.806039 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.806051 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.806071 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.806081 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:17Z","lastTransitionTime":"2025-12-02T14:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.824704 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:29:17 crc kubenswrapper[4814]: E1202 14:29:17.824916 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 14:29:19.82488567 +0000 UTC m=+21.949247884 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.908988 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.909029 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.909038 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.909074 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.909084 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:17Z","lastTransitionTime":"2025-12-02T14:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.926027 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.926109 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.926148 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:17 crc kubenswrapper[4814]: I1202 14:29:17.926177 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:17 crc kubenswrapper[4814]: E1202 14:29:17.926351 4814 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 02 14:29:17 crc kubenswrapper[4814]: E1202 14:29:17.926383 4814 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 14:29:17 crc kubenswrapper[4814]: E1202 14:29:17.926390 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 14:29:17 crc kubenswrapper[4814]: E1202 14:29:17.926443 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 14:29:17 crc kubenswrapper[4814]: E1202 14:29:17.926410 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 14:29:17 crc kubenswrapper[4814]: E1202 14:29:17.926466 4814 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:17 crc kubenswrapper[4814]: E1202 14:29:17.926487 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 14:29:17 crc kubenswrapper[4814]: E1202 14:29:17.926507 4814 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:17 crc kubenswrapper[4814]: E1202 14:29:17.926468 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:19.926437524 +0000 UTC m=+22.050799728 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 14:29:17 crc kubenswrapper[4814]: E1202 14:29:17.926588 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:19.926547737 +0000 UTC m=+22.050909981 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 14:29:17 crc kubenswrapper[4814]: E1202 14:29:17.926620 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:19.926604598 +0000 UTC m=+22.050966842 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:17 crc kubenswrapper[4814]: E1202 14:29:17.926657 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:19.926645219 +0000 UTC m=+22.051007463 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.011886 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.011932 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.011944 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.011964 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.011976 4814 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:18Z","lastTransitionTime":"2025-12-02T14:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.114068 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.114129 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.114141 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.114158 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.114170 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:18Z","lastTransitionTime":"2025-12-02T14:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.144640 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.144701 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.144639 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:18 crc kubenswrapper[4814]: E1202 14:29:18.144781 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:29:18 crc kubenswrapper[4814]: E1202 14:29:18.144855 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:29:18 crc kubenswrapper[4814]: E1202 14:29:18.144947 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.149831 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.150598 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.152521 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.154490 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.156640 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.157538 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.158392 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.159448 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.160183 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.162371 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.162760 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c
8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.163137 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.164417 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.165188 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.165947 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.166909 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.167907 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.168748 4814 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.169339 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.170184 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.171034 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.171697 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.174702 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.175348 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.176174 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.176965 4814 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.177690 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.179518 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.180029 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.180636 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.180716 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.181092 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.181578 4814 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.181712 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.183093 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.183605 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.184001 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.185312 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.185953 4814 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.186501 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.187115 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.187786 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.191308 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.191938 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.192936 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.193839 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.194728 4814 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.195649 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.196496 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.197568 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.198263 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.198325 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.198944 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.199779 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.200562 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.201384 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.202187 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.215476 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.217117 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.217142 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.217150 4814 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.217164 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.217173 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:18Z","lastTransitionTime":"2025-12-02T14:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.233439 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.259402 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.269784 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.280492 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.292063 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.320335 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.320408 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.320423 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.320448 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.320463 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:18Z","lastTransitionTime":"2025-12-02T14:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.422791 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.422838 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.422847 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.422865 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.422875 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:18Z","lastTransitionTime":"2025-12-02T14:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.525193 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.525294 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.525308 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.525331 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.525343 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:18Z","lastTransitionTime":"2025-12-02T14:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.628450 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.628500 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.628514 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.628530 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.628542 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:18Z","lastTransitionTime":"2025-12-02T14:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.731051 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.731093 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.731101 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.731117 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.731126 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:18Z","lastTransitionTime":"2025-12-02T14:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.833456 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.833510 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.833522 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.833540 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.833556 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:18Z","lastTransitionTime":"2025-12-02T14:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.936486 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.936549 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.936564 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.936586 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:18 crc kubenswrapper[4814]: I1202 14:29:18.936599 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:18Z","lastTransitionTime":"2025-12-02T14:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.038552 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.038591 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.038602 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.038619 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.038629 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:19Z","lastTransitionTime":"2025-12-02T14:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.141133 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.141206 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.141251 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.141276 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.141292 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:19Z","lastTransitionTime":"2025-12-02T14:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.243144 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.243182 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.243191 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.243205 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.243214 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:19Z","lastTransitionTime":"2025-12-02T14:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.289925 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370"} Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.304061 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:19Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.314519 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:19Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.325154 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:19Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.336616 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:19Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.345117 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 
14:29:19.345153 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.345163 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.345180 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.345189 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:19Z","lastTransitionTime":"2025-12-02T14:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.349837 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:19Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.369141 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:19Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.406278 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c
8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:19Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.420530 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:19Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.433371 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:19Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.447106 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.447147 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.447159 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 
14:29:19.447176 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.447188 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:19Z","lastTransitionTime":"2025-12-02T14:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.550521 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.550575 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.550586 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.550604 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.550614 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:19Z","lastTransitionTime":"2025-12-02T14:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.654205 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.654315 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.654342 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.654370 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.654389 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:19Z","lastTransitionTime":"2025-12-02T14:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.757614 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.757665 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.757681 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.757702 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.757714 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:19Z","lastTransitionTime":"2025-12-02T14:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.841451 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:29:19 crc kubenswrapper[4814]: E1202 14:29:19.841555 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 14:29:23.841533202 +0000 UTC m=+25.965895406 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.859891 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.859928 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.859937 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.859951 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.859959 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:19Z","lastTransitionTime":"2025-12-02T14:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.942305 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.942391 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.942413 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.942435 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:19 crc kubenswrapper[4814]: E1202 14:29:19.942481 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Dec 02 14:29:19 crc kubenswrapper[4814]: E1202 14:29:19.942519 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 14:29:19 crc kubenswrapper[4814]: E1202 14:29:19.942533 4814 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:19 crc kubenswrapper[4814]: E1202 14:29:19.942552 4814 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 14:29:19 crc kubenswrapper[4814]: E1202 14:29:19.942589 4814 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 14:29:19 crc kubenswrapper[4814]: E1202 14:29:19.942600 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:23.942577814 +0000 UTC m=+26.066940018 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:19 crc kubenswrapper[4814]: E1202 14:29:19.942734 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:23.942712427 +0000 UTC m=+26.067074631 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 14:29:19 crc kubenswrapper[4814]: E1202 14:29:19.942553 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 14:29:19 crc kubenswrapper[4814]: E1202 14:29:19.942757 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:23.942750378 +0000 UTC m=+26.067112582 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 14:29:19 crc kubenswrapper[4814]: E1202 14:29:19.942784 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 14:29:19 crc kubenswrapper[4814]: E1202 14:29:19.942801 4814 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:19 crc kubenswrapper[4814]: E1202 14:29:19.943062 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:23.942943933 +0000 UTC m=+26.067306127 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.962484 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.962523 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.962533 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.962549 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:19 crc kubenswrapper[4814]: I1202 14:29:19.962564 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:19Z","lastTransitionTime":"2025-12-02T14:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.064720 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.064819 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.064832 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.064849 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.064862 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:20Z","lastTransitionTime":"2025-12-02T14:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.143831 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.143876 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.143849 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:20 crc kubenswrapper[4814]: E1202 14:29:20.144110 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:29:20 crc kubenswrapper[4814]: E1202 14:29:20.144286 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:29:20 crc kubenswrapper[4814]: E1202 14:29:20.144306 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.166880 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.166926 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.166936 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.166952 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.166963 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:20Z","lastTransitionTime":"2025-12-02T14:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.269336 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.269397 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.269410 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.269431 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.269444 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:20Z","lastTransitionTime":"2025-12-02T14:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.371498 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.371570 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.371591 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.371613 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.371634 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:20Z","lastTransitionTime":"2025-12-02T14:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.473908 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.473968 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.473979 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.473997 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.474008 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:20Z","lastTransitionTime":"2025-12-02T14:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.577075 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.577134 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.577144 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.577165 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.577176 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:20Z","lastTransitionTime":"2025-12-02T14:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.679856 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.679957 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.679979 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.680034 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.680053 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:20Z","lastTransitionTime":"2025-12-02T14:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.783549 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.783600 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.783613 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.783634 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.783646 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:20Z","lastTransitionTime":"2025-12-02T14:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.886253 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.886315 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.886333 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.886358 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.886377 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:20Z","lastTransitionTime":"2025-12-02T14:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.988897 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.988980 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.988998 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.989020 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:20 crc kubenswrapper[4814]: I1202 14:29:20.989032 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:20Z","lastTransitionTime":"2025-12-02T14:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.091925 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.092008 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.092019 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.092072 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.092085 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:21Z","lastTransitionTime":"2025-12-02T14:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.194823 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.194881 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.194891 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.194908 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.194919 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:21Z","lastTransitionTime":"2025-12-02T14:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.296570 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.296619 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.296632 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.296650 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.296661 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:21Z","lastTransitionTime":"2025-12-02T14:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.396540 4814 csr.go:261] certificate signing request csr-mq9k6 is approved, waiting to be issued Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.399609 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.399655 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.399666 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.399683 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.399693 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:21Z","lastTransitionTime":"2025-12-02T14:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.416135 4814 csr.go:257] certificate signing request csr-mq9k6 is issued Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.502308 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.502363 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.502375 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.502396 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.502415 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:21Z","lastTransitionTime":"2025-12-02T14:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.604748 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.604810 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.604825 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.604854 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.604867 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:21Z","lastTransitionTime":"2025-12-02T14:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.707202 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.707261 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.707271 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.707288 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.707298 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:21Z","lastTransitionTime":"2025-12-02T14:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.780291 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-bfz4v"] Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.780619 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bfz4v" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.782250 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.782945 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.785077 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.801066 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.809531 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.809584 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.809596 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.809619 4814 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.809633 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:21Z","lastTransitionTime":"2025-12-02T14:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.817792 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c
8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.831514 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.843661 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.857039 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.861223 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbl4l\" (UniqueName: \"kubernetes.io/projected/ddebe70e-e835-47d3-b46f-1177bc066d81-kube-api-access-rbl4l\") pod \"node-resolver-bfz4v\" (UID: \"ddebe70e-e835-47d3-b46f-1177bc066d81\") " pod="openshift-dns/node-resolver-bfz4v" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.861337 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ddebe70e-e835-47d3-b46f-1177bc066d81-hosts-file\") pod \"node-resolver-bfz4v\" (UID: \"ddebe70e-e835-47d3-b46f-1177bc066d81\") " pod="openshift-dns/node-resolver-bfz4v" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.871276 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.886507 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.903576 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.911827 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.911919 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.911941 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.911971 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.911991 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:21Z","lastTransitionTime":"2025-12-02T14:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.916590 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.929165 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.962788 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ddebe70e-e835-47d3-b46f-1177bc066d81-hosts-file\") pod \"node-resolver-bfz4v\" (UID: \"ddebe70e-e835-47d3-b46f-1177bc066d81\") " pod="openshift-dns/node-resolver-bfz4v" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.962861 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbl4l\" (UniqueName: 
\"kubernetes.io/projected/ddebe70e-e835-47d3-b46f-1177bc066d81-kube-api-access-rbl4l\") pod \"node-resolver-bfz4v\" (UID: \"ddebe70e-e835-47d3-b46f-1177bc066d81\") " pod="openshift-dns/node-resolver-bfz4v" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.962972 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ddebe70e-e835-47d3-b46f-1177bc066d81-hosts-file\") pod \"node-resolver-bfz4v\" (UID: \"ddebe70e-e835-47d3-b46f-1177bc066d81\") " pod="openshift-dns/node-resolver-bfz4v" Dec 02 14:29:21 crc kubenswrapper[4814]: I1202 14:29:21.982860 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbl4l\" (UniqueName: \"kubernetes.io/projected/ddebe70e-e835-47d3-b46f-1177bc066d81-kube-api-access-rbl4l\") pod \"node-resolver-bfz4v\" (UID: \"ddebe70e-e835-47d3-b46f-1177bc066d81\") " pod="openshift-dns/node-resolver-bfz4v" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.014631 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.014680 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.014691 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.014711 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.014725 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:22Z","lastTransitionTime":"2025-12-02T14:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.091787 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bfz4v" Dec 02 14:29:22 crc kubenswrapper[4814]: W1202 14:29:22.105262 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddebe70e_e835_47d3_b46f_1177bc066d81.slice/crio-36a7644f75b79c433fa601afd0b5176dd67ff57d27639b3c3ffd14a24756f50b WatchSource:0}: Error finding container 36a7644f75b79c433fa601afd0b5176dd67ff57d27639b3c3ffd14a24756f50b: Status 404 returned error can't find the container with id 36a7644f75b79c433fa601afd0b5176dd67ff57d27639b3c3ffd14a24756f50b Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.120453 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.120503 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.120515 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.120540 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.120550 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:22Z","lastTransitionTime":"2025-12-02T14:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.143917 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.144001 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.144112 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:22 crc kubenswrapper[4814]: E1202 14:29:22.144182 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:29:22 crc kubenswrapper[4814]: E1202 14:29:22.144384 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:29:22 crc kubenswrapper[4814]: E1202 14:29:22.144605 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.158586 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-cd6tj"] Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.158951 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.158998 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-k6qjb"] Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.159896 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.163425 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.163480 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.163552 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-k4kc8"] Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.163579 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.163669 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.163835 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.163888 4814 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.163968 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.163998 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.166883 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.172902 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.172997 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.172902 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.173268 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.198103 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc6
3c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506c
e0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.222789 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.227654 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.227696 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.227707 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.227729 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.227742 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:22Z","lastTransitionTime":"2025-12-02T14:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.244532 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.260268 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264609 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3acbdd1e-1a6f-407f-be68-b570b5fde941-os-release\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 
14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264642 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3acbdd1e-1a6f-407f-be68-b570b5fde941-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264671 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/733ab60d-6038-4a5b-9120-9c4ef1f47774-cni-binary-copy\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264687 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp6gp\" (UniqueName: \"kubernetes.io/projected/733ab60d-6038-4a5b-9120-9c4ef1f47774-kube-api-access-bp6gp\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264706 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-system-cni-dir\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264722 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmdbm\" (UniqueName: \"kubernetes.io/projected/d93bbd5a-650e-4f70-9b1c-868df83a667c-kube-api-access-pmdbm\") pod \"machine-config-daemon-k4kc8\" (UID: \"d93bbd5a-650e-4f70-9b1c-868df83a667c\") " 
pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264740 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-cnibin\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264755 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-os-release\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264773 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d93bbd5a-650e-4f70-9b1c-868df83a667c-proxy-tls\") pod \"machine-config-daemon-k4kc8\" (UID: \"d93bbd5a-650e-4f70-9b1c-868df83a667c\") " pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264790 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-host-var-lib-cni-multus\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264821 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d93bbd5a-650e-4f70-9b1c-868df83a667c-mcd-auth-proxy-config\") pod \"machine-config-daemon-k4kc8\" (UID: 
\"d93bbd5a-650e-4f70-9b1c-868df83a667c\") " pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264837 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3acbdd1e-1a6f-407f-be68-b570b5fde941-cni-binary-copy\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264852 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-host-var-lib-cni-bin\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264876 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-multus-cni-dir\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264890 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-host-var-lib-kubelet\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264905 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-etc-kubernetes\") pod 
\"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264921 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-multus-socket-dir-parent\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264936 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-hostroot\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264953 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-multus-conf-dir\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264972 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-host-run-netns\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.264990 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-host-run-multus-certs\") pod \"multus-cd6tj\" (UID: 
\"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.265006 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d93bbd5a-650e-4f70-9b1c-868df83a667c-rootfs\") pod \"machine-config-daemon-k4kc8\" (UID: \"d93bbd5a-650e-4f70-9b1c-868df83a667c\") " pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.265029 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/733ab60d-6038-4a5b-9120-9c4ef1f47774-multus-daemon-config\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.265053 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3acbdd1e-1a6f-407f-be68-b570b5fde941-cnibin\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.265070 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrf9m\" (UniqueName: \"kubernetes.io/projected/3acbdd1e-1a6f-407f-be68-b570b5fde941-kube-api-access-rrf9m\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.265093 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-host-run-k8s-cni-cncf-io\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.265115 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3acbdd1e-1a6f-407f-be68-b570b5fde941-system-cni-dir\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.265160 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3acbdd1e-1a6f-407f-be68-b570b5fde941-tuning-conf-dir\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.272682 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.286151 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.298249 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bfz4v" event={"ID":"ddebe70e-e835-47d3-b46f-1177bc066d81","Type":"ContainerStarted","Data":"36a7644f75b79c433fa601afd0b5176dd67ff57d27639b3c3ffd14a24756f50b"} Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.306285 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.322976 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.329921 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.329972 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.329983 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.330001 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.330021 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:22Z","lastTransitionTime":"2025-12-02T14:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.334743 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.349061 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.365483 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366196 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3acbdd1e-1a6f-407f-be68-b570b5fde941-os-release\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366259 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3acbdd1e-1a6f-407f-be68-b570b5fde941-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366285 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/733ab60d-6038-4a5b-9120-9c4ef1f47774-cni-binary-copy\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366302 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp6gp\" (UniqueName: \"kubernetes.io/projected/733ab60d-6038-4a5b-9120-9c4ef1f47774-kube-api-access-bp6gp\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366324 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-system-cni-dir\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366341 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmdbm\" (UniqueName: \"kubernetes.io/projected/d93bbd5a-650e-4f70-9b1c-868df83a667c-kube-api-access-pmdbm\") pod \"machine-config-daemon-k4kc8\" (UID: \"d93bbd5a-650e-4f70-9b1c-868df83a667c\") " pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366358 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-os-release\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366377 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/d93bbd5a-650e-4f70-9b1c-868df83a667c-proxy-tls\") pod \"machine-config-daemon-k4kc8\" (UID: \"d93bbd5a-650e-4f70-9b1c-868df83a667c\") " pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366398 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-cnibin\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366422 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d93bbd5a-650e-4f70-9b1c-868df83a667c-mcd-auth-proxy-config\") pod \"machine-config-daemon-k4kc8\" (UID: \"d93bbd5a-650e-4f70-9b1c-868df83a667c\") " pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366440 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3acbdd1e-1a6f-407f-be68-b570b5fde941-cni-binary-copy\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366458 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-host-var-lib-cni-multus\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366477 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-host-var-lib-cni-bin\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366502 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-multus-cni-dir\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366533 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-host-var-lib-kubelet\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366551 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-etc-kubernetes\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366568 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-multus-socket-dir-parent\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366587 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-hostroot\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " 
pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366603 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-multus-conf-dir\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366618 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-host-run-netns\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366638 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d93bbd5a-650e-4f70-9b1c-868df83a667c-rootfs\") pod \"machine-config-daemon-k4kc8\" (UID: \"d93bbd5a-650e-4f70-9b1c-868df83a667c\") " pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366654 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-host-run-multus-certs\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366681 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/733ab60d-6038-4a5b-9120-9c4ef1f47774-multus-daemon-config\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366705 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3acbdd1e-1a6f-407f-be68-b570b5fde941-cnibin\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366722 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrf9m\" (UniqueName: \"kubernetes.io/projected/3acbdd1e-1a6f-407f-be68-b570b5fde941-kube-api-access-rrf9m\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366735 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3acbdd1e-1a6f-407f-be68-b570b5fde941-os-release\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366739 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-host-run-k8s-cni-cncf-io\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366788 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-host-run-k8s-cni-cncf-io\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366815 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3acbdd1e-1a6f-407f-be68-b570b5fde941-system-cni-dir\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366841 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-etc-kubernetes\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366851 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3acbdd1e-1a6f-407f-be68-b570b5fde941-tuning-conf-dir\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366887 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-multus-socket-dir-parent\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366929 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-host-var-lib-kubelet\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.366978 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" 
(UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-host-var-lib-cni-multus\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.367154 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3acbdd1e-1a6f-407f-be68-b570b5fde941-system-cni-dir\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.367192 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-system-cni-dir\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.367311 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-host-var-lib-cni-bin\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.367362 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-cnibin\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.367391 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-host-run-multus-certs\") pod \"multus-cd6tj\" (UID: 
\"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.367445 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-multus-conf-dir\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.367469 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-os-release\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.367478 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3acbdd1e-1a6f-407f-be68-b570b5fde941-cnibin\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.367492 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-host-run-netns\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.367535 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-multus-cni-dir\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.367599 4814 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/733ab60d-6038-4a5b-9120-9c4ef1f47774-hostroot\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.367589 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d93bbd5a-650e-4f70-9b1c-868df83a667c-rootfs\") pod \"machine-config-daemon-k4kc8\" (UID: \"d93bbd5a-650e-4f70-9b1c-868df83a667c\") " pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.368251 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/733ab60d-6038-4a5b-9120-9c4ef1f47774-cni-binary-copy\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.368314 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3acbdd1e-1a6f-407f-be68-b570b5fde941-cni-binary-copy\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.368337 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/733ab60d-6038-4a5b-9120-9c4ef1f47774-multus-daemon-config\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.368375 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/3acbdd1e-1a6f-407f-be68-b570b5fde941-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.368516 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3acbdd1e-1a6f-407f-be68-b570b5fde941-tuning-conf-dir\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.368527 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d93bbd5a-650e-4f70-9b1c-868df83a667c-mcd-auth-proxy-config\") pod \"machine-config-daemon-k4kc8\" (UID: \"d93bbd5a-650e-4f70-9b1c-868df83a667c\") " pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.372513 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d93bbd5a-650e-4f70-9b1c-868df83a667c-proxy-tls\") pod \"machine-config-daemon-k4kc8\" (UID: \"d93bbd5a-650e-4f70-9b1c-868df83a667c\") " pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.385309 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmdbm\" (UniqueName: \"kubernetes.io/projected/d93bbd5a-650e-4f70-9b1c-868df83a667c-kube-api-access-pmdbm\") pod \"machine-config-daemon-k4kc8\" (UID: \"d93bbd5a-650e-4f70-9b1c-868df83a667c\") " pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.386821 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-rrf9m\" (UniqueName: \"kubernetes.io/projected/3acbdd1e-1a6f-407f-be68-b570b5fde941-kube-api-access-rrf9m\") pod \"multus-additional-cni-plugins-k6qjb\" (UID: \"3acbdd1e-1a6f-407f-be68-b570b5fde941\") " pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.386857 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp6gp\" (UniqueName: \"kubernetes.io/projected/733ab60d-6038-4a5b-9120-9c4ef1f47774-kube-api-access-bp6gp\") pod \"multus-cd6tj\" (UID: \"733ab60d-6038-4a5b-9120-9c4ef1f47774\") " pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.388019 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.406072 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.417529 4814 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-02 14:24:21 +0000 UTC, rotation deadline is 2026-09-03 13:39:11.295935874 +0000 UTC Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.417608 4814 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6599h9m48.878330561s 
for next certificate rotation Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.419159 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.432495 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.432529 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.432539 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.432556 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.432565 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:22Z","lastTransitionTime":"2025-12-02T14:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.441111 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.454097 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c
8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.468942 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-i
dentity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.481739 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.483869 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-cd6tj" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.493839 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.494800 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.503445 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 14:29:22 crc kubenswrapper[4814]: W1202 14:29:22.506964 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3acbdd1e_1a6f_407f_be68_b570b5fde941.slice/crio-f230a3b16ecf625a25051d35145a6d581c13e0b6e4112bad1f0f348afbd7fa3b WatchSource:0}: Error finding container f230a3b16ecf625a25051d35145a6d581c13e0b6e4112bad1f0f348afbd7fa3b: Status 404 returned error can't find the container with id f230a3b16ecf625a25051d35145a6d581c13e0b6e4112bad1f0f348afbd7fa3b Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.511742 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: W1202 14:29:22.517800 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd93bbd5a_650e_4f70_9b1c_868df83a667c.slice/crio-646dbfdf665ad87ebbebaf0563660c402c4fb4ef3b5e9dcd58678db27d8e5294 WatchSource:0}: Error finding container 646dbfdf665ad87ebbebaf0563660c402c4fb4ef3b5e9dcd58678db27d8e5294: Status 404 returned error can't find the container with id 646dbfdf665ad87ebbebaf0563660c402c4fb4ef3b5e9dcd58678db27d8e5294 Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.525267 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.535373 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.535415 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.535425 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.535443 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.535457 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:22Z","lastTransitionTime":"2025-12-02T14:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.541258 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.544679 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gnpv5"] Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.545657 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.548457 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.548786 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.548886 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.549024 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.549121 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.550480 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.550644 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.564722 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.577429 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.591715 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.603733 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.618210 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.629674 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.638418 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.638482 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.638496 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.638514 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.638527 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:22Z","lastTransitionTime":"2025-12-02T14:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.644437 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.653938 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.670892 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.675314 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-run-netns\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.675375 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d5cd5566-9b36-421b-8f02-f3d12db9b829-ovn-node-metrics-cert\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.675394 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-run-openvswitch\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.675410 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-cni-netd\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.675442 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-run-ovn\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.675471 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh6s8\" (UniqueName: \"kubernetes.io/projected/d5cd5566-9b36-421b-8f02-f3d12db9b829-kube-api-access-lh6s8\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.675503 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-run-systemd\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.675582 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-slash\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.675605 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-etc-openvswitch\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.675695 4814 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-log-socket\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.675734 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d5cd5566-9b36-421b-8f02-f3d12db9b829-env-overrides\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.675764 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-systemd-units\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.675802 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-node-log\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.675828 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d5cd5566-9b36-421b-8f02-f3d12db9b829-ovnkube-config\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.675849 4814 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d5cd5566-9b36-421b-8f02-f3d12db9b829-ovnkube-script-lib\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.675884 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-kubelet\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.675913 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-var-lib-openvswitch\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.675950 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-run-ovn-kubernetes\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.675972 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.675995 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-cni-bin\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.710714 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name
\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da3
2307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.741174 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.741215 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.741242 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.741259 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 
14:29:22.741270 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:22Z","lastTransitionTime":"2025-12-02T14:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.766720 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c
8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777434 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d5cd5566-9b36-421b-8f02-f3d12db9b829-ovnkube-config\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777474 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d5cd5566-9b36-421b-8f02-f3d12db9b829-ovnkube-script-lib\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777491 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-var-lib-openvswitch\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777511 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-kubelet\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777531 
4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-run-ovn-kubernetes\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777548 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777565 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-cni-bin\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777593 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-run-netns\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777611 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d5cd5566-9b36-421b-8f02-f3d12db9b829-ovn-node-metrics-cert\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777621 
4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-kubelet\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777683 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777711 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-cni-bin\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777688 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-run-ovn-kubernetes\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777736 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-run-netns\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777753 4814 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-var-lib-openvswitch\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777821 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-run-openvswitch\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777775 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-run-openvswitch\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777876 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-cni-netd\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777903 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-cni-netd\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777941 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh6s8\" (UniqueName: 
\"kubernetes.io/projected/d5cd5566-9b36-421b-8f02-f3d12db9b829-kube-api-access-lh6s8\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.777980 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-run-ovn\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.778011 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-run-systemd\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.778057 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-slash\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.778086 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-etc-openvswitch\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.778132 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-systemd-units\") pod 
\"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.778154 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-log-socket\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.778179 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d5cd5566-9b36-421b-8f02-f3d12db9b829-env-overrides\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.778207 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-node-log\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.778265 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-etc-openvswitch\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.778297 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d5cd5566-9b36-421b-8f02-f3d12db9b829-ovnkube-script-lib\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.778323 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-run-systemd\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.778307 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-node-log\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.778336 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-systemd-units\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.778306 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-run-ovn\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.778367 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-log-socket\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.778374 4814 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-slash\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.778366 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d5cd5566-9b36-421b-8f02-f3d12db9b829-ovnkube-config\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.778960 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d5cd5566-9b36-421b-8f02-f3d12db9b829-env-overrides\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.782331 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d5cd5566-9b36-421b-8f02-f3d12db9b829-ovn-node-metrics-cert\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.789143 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.797060 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh6s8\" (UniqueName: \"kubernetes.io/projected/d5cd5566-9b36-421b-8f02-f3d12db9b829-kube-api-access-lh6s8\") pod \"ovnkube-node-gnpv5\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.803953 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.816489 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.829066 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.842296 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d56
5e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.843411 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.843463 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.843475 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.843510 4814 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.843525 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:22Z","lastTransitionTime":"2025-12-02T14:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.881963 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:22 crc kubenswrapper[4814]: W1202 14:29:22.894897 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5cd5566_9b36_421b_8f02_f3d12db9b829.slice/crio-7c28694b94f1aab9bcdd24717a62591363cf984d31eaf38087b6781e1227eabd WatchSource:0}: Error finding container 7c28694b94f1aab9bcdd24717a62591363cf984d31eaf38087b6781e1227eabd: Status 404 returned error can't find the container with id 7c28694b94f1aab9bcdd24717a62591363cf984d31eaf38087b6781e1227eabd Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.946122 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.946171 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.946183 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:22 crc kubenswrapper[4814]: I1202 14:29:22.946203 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:22 crc 
kubenswrapper[4814]: I1202 14:29:22.946214 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:22Z","lastTransitionTime":"2025-12-02T14:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.049296 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.049873 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.049888 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.049904 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.049914 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:23Z","lastTransitionTime":"2025-12-02T14:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.113523 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.118722 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.135185 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\
\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da323
07b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4
9117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.150735 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63
c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce
0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.152543 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.152571 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.152584 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.152600 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.152611 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:23Z","lastTransitionTime":"2025-12-02T14:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.167283 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.179778 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.193309 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.205289 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.217577 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d56
5e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.273964 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.291559 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.291861 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.291913 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.291925 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.291947 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.291963 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:23Z","lastTransitionTime":"2025-12-02T14:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.303062 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bfz4v" event={"ID":"ddebe70e-e835-47d3-b46f-1177bc066d81","Type":"ContainerStarted","Data":"f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8"} Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.304954 4814 generic.go:334] "Generic (PLEG): container finished" podID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerID="9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c" exitCode=0 Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.305048 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerDied","Data":"9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c"} Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.305084 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerStarted","Data":"7c28694b94f1aab9bcdd24717a62591363cf984d31eaf38087b6781e1227eabd"} Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.309576 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.310483 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerStarted","Data":"b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed"} Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.310535 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerStarted","Data":"0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2"} Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.310550 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerStarted","Data":"646dbfdf665ad87ebbebaf0563660c402c4fb4ef3b5e9dcd58678db27d8e5294"} Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.318017 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cd6tj" event={"ID":"733ab60d-6038-4a5b-9120-9c4ef1f47774","Type":"ContainerStarted","Data":"b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e"} Dec 02 14:29:23 crc 
kubenswrapper[4814]: I1202 14:29:23.318062 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cd6tj" event={"ID":"733ab60d-6038-4a5b-9120-9c4ef1f47774","Type":"ContainerStarted","Data":"860261daeb1bdc0db15d93bf53ef77222c9c09258093a4c01472eec1a9ef316c"} Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.321604 4814 generic.go:334] "Generic (PLEG): container finished" podID="3acbdd1e-1a6f-407f-be68-b570b5fde941" containerID="7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8" exitCode=0 Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.321685 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" event={"ID":"3acbdd1e-1a6f-407f-be68-b570b5fde941","Type":"ContainerDied","Data":"7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8"} Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.321718 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" event={"ID":"3acbdd1e-1a6f-407f-be68-b570b5fde941","Type":"ContainerStarted","Data":"f230a3b16ecf625a25051d35145a6d581c13e0b6e4112bad1f0f348afbd7fa3b"} Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.325377 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.339216 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.348690 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.369327 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.379435 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T
14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.398365 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.399380 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.399391 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.399409 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.399721 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:23Z","lastTransitionTime":"2025-12-02T14:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.399206 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.412424 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.425927 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.441368 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.456693 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.470095 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.490086 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.502618 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.502831 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.502892 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.502951 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.503033 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:23Z","lastTransitionTime":"2025-12-02T14:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.507448 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.520675 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.531132 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.542622 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.555936 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.566904 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd4360474
41229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.605067 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.605105 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.605116 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:23 crc 
kubenswrapper[4814]: I1202 14:29:23.605131 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.605140 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:23Z","lastTransitionTime":"2025-12-02T14:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.708407 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.708446 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.708457 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.708475 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.708486 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:23Z","lastTransitionTime":"2025-12-02T14:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.813443 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.814034 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.814050 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.814069 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.814080 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:23Z","lastTransitionTime":"2025-12-02T14:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.890673 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:29:23 crc kubenswrapper[4814]: E1202 14:29:23.890889 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 14:29:31.89087014 +0000 UTC m=+34.015232344 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.917123 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.917184 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.917199 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.917220 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.917252 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:23Z","lastTransitionTime":"2025-12-02T14:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.992297 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.992362 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.992404 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:23 crc kubenswrapper[4814]: I1202 14:29:23.992430 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:23 crc kubenswrapper[4814]: E1202 14:29:23.992594 4814 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 14:29:23 crc kubenswrapper[4814]: E1202 14:29:23.992691 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:31.992671951 +0000 UTC m=+34.117034155 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 14:29:23 crc kubenswrapper[4814]: E1202 14:29:23.993108 4814 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 14:29:23 crc kubenswrapper[4814]: E1202 14:29:23.993149 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:31.993140172 +0000 UTC m=+34.117502376 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 14:29:23 crc kubenswrapper[4814]: E1202 14:29:23.993252 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 14:29:23 crc kubenswrapper[4814]: E1202 14:29:23.993274 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 14:29:23 crc kubenswrapper[4814]: E1202 14:29:23.993289 4814 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:23 crc kubenswrapper[4814]: E1202 14:29:23.993324 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:31.993315697 +0000 UTC m=+34.117677901 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:23 crc kubenswrapper[4814]: E1202 14:29:23.993382 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 14:29:23 crc kubenswrapper[4814]: E1202 14:29:23.993396 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 14:29:23 crc kubenswrapper[4814]: E1202 14:29:23.993423 4814 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:23 crc kubenswrapper[4814]: E1202 14:29:23.993453 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:31.99344219 +0000 UTC m=+34.117804394 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.019803 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.019872 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.019885 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.019906 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.019921 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:24Z","lastTransitionTime":"2025-12-02T14:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.122029 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.122071 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.122083 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.122100 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.122111 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:24Z","lastTransitionTime":"2025-12-02T14:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.144412 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:24 crc kubenswrapper[4814]: E1202 14:29:24.144787 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.144522 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:24 crc kubenswrapper[4814]: E1202 14:29:24.144981 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.144441 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:24 crc kubenswrapper[4814]: E1202 14:29:24.145141 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.224209 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.224261 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.224271 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.224287 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.224298 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:24Z","lastTransitionTime":"2025-12-02T14:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.325979 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.326323 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.326408 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.326449 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.326473 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:24Z","lastTransitionTime":"2025-12-02T14:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.326737 4814 generic.go:334] "Generic (PLEG): container finished" podID="3acbdd1e-1a6f-407f-be68-b570b5fde941" containerID="ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14" exitCode=0 Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.326826 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" event={"ID":"3acbdd1e-1a6f-407f-be68-b570b5fde941","Type":"ContainerDied","Data":"ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14"} Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.331403 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerStarted","Data":"593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff"} Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.331476 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerStarted","Data":"9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe"} Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.331501 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerStarted","Data":"261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f"} Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.331519 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerStarted","Data":"b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4"} Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.331538 4814 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerStarted","Data":"a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3"} Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.331559 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerStarted","Data":"db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9"} Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.346003 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:24Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.359399 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:24Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.373461 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:24Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.385949 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd4360474
41229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:24Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.405550 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:24Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.419478 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:24Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.428863 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.428897 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.428906 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.428921 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.428929 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:24Z","lastTransitionTime":"2025-12-02T14:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.439663 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:24Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.458483 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:24Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.469712 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:24Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.484799 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:24Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.495306 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:24Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.509475 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:24Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.521411 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:29:24Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.530792 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.530925 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.531010 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.531107 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.531191 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:24Z","lastTransitionTime":"2025-12-02T14:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.533711 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:24Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.634869 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.634913 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.634923 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.634942 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.634954 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:24Z","lastTransitionTime":"2025-12-02T14:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.737761 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.737811 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.737820 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.737839 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.737850 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:24Z","lastTransitionTime":"2025-12-02T14:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.840635 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.840709 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.840727 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.840756 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.840778 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:24Z","lastTransitionTime":"2025-12-02T14:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.944066 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.944118 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.944128 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.944150 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:24 crc kubenswrapper[4814]: I1202 14:29:24.944162 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:24Z","lastTransitionTime":"2025-12-02T14:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.046586 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.046636 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.046648 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.046666 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.046676 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:25Z","lastTransitionTime":"2025-12-02T14:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.142266 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-bg695"] Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.142722 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bg695" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.145022 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.145189 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.145291 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.146377 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.149805 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.149840 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.149850 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.149866 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.149876 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:25Z","lastTransitionTime":"2025-12-02T14:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.167909 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.179741 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.193567 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.206352 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.206603 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/a66bac68-7ada-40a6-8407-54161c4ff527-serviceca\") pod \"node-ca-bg695\" (UID: \"a66bac68-7ada-40a6-8407-54161c4ff527\") " pod="openshift-image-registry/node-ca-bg695" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.207107 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hh2l\" (UniqueName: \"kubernetes.io/projected/a66bac68-7ada-40a6-8407-54161c4ff527-kube-api-access-6hh2l\") pod \"node-ca-bg695\" (UID: \"a66bac68-7ada-40a6-8407-54161c4ff527\") " pod="openshift-image-registry/node-ca-bg695" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.207152 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a66bac68-7ada-40a6-8407-54161c4ff527-host\") pod \"node-ca-bg695\" (UID: \"a66bac68-7ada-40a6-8407-54161c4ff527\") " pod="openshift-image-registry/node-ca-bg695" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.227680 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.253181 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.253247 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.253258 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.253275 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.253291 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:25Z","lastTransitionTime":"2025-12-02T14:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.273057 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.304350 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.307974 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hh2l\" (UniqueName: \"kubernetes.io/projected/a66bac68-7ada-40a6-8407-54161c4ff527-kube-api-access-6hh2l\") pod \"node-ca-bg695\" (UID: \"a66bac68-7ada-40a6-8407-54161c4ff527\") " pod="openshift-image-registry/node-ca-bg695" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.308120 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host\" (UniqueName: \"kubernetes.io/host-path/a66bac68-7ada-40a6-8407-54161c4ff527-host\") pod \"node-ca-bg695\" (UID: \"a66bac68-7ada-40a6-8407-54161c4ff527\") " pod="openshift-image-registry/node-ca-bg695" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.308239 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a66bac68-7ada-40a6-8407-54161c4ff527-serviceca\") pod \"node-ca-bg695\" (UID: \"a66bac68-7ada-40a6-8407-54161c4ff527\") " pod="openshift-image-registry/node-ca-bg695" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.308305 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a66bac68-7ada-40a6-8407-54161c4ff527-host\") pod \"node-ca-bg695\" (UID: \"a66bac68-7ada-40a6-8407-54161c4ff527\") " pod="openshift-image-registry/node-ca-bg695" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.309791 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a66bac68-7ada-40a6-8407-54161c4ff527-serviceca\") pod \"node-ca-bg695\" (UID: \"a66bac68-7ada-40a6-8407-54161c4ff527\") " pod="openshift-image-registry/node-ca-bg695" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.340667 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.342114 4814 generic.go:334] "Generic (PLEG): container finished" podID="3acbdd1e-1a6f-407f-be68-b570b5fde941" containerID="05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574" exitCode=0 Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.342165 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" event={"ID":"3acbdd1e-1a6f-407f-be68-b570b5fde941","Type":"ContainerDied","Data":"05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574"} Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.343445 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hh2l\" (UniqueName: \"kubernetes.io/projected/a66bac68-7ada-40a6-8407-54161c4ff527-kube-api-access-6hh2l\") pod \"node-ca-bg695\" (UID: 
\"a66bac68-7ada-40a6-8407-54161c4ff527\") " pod="openshift-image-registry/node-ca-bg695" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.355497 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.355534 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.355547 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.355568 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.355581 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:25Z","lastTransitionTime":"2025-12-02T14:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.360311 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.380471 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.403184 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.415787 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.430771 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.444806 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.459710 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bg695" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.460030 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.460887 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.460917 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.460925 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.460941 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.460953 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:25Z","lastTransitionTime":"2025-12-02T14:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.479038 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.495439 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},
{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.510863 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.524971 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.541915 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 
14:29:25.561759 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.564071 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.564117 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:25 crc 
kubenswrapper[4814]: I1202 14:29:25.564127 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.564145 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.564156 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:25Z","lastTransitionTime":"2025-12-02T14:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.584971 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.606600 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.622404 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.642560 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.665448 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.669639 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.669705 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.669719 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.669763 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.669780 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:25Z","lastTransitionTime":"2025-12-02T14:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.679135 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a
79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.694161 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.709352 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.723068 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:25Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.775509 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.775561 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.775571 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.775586 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.775598 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:25Z","lastTransitionTime":"2025-12-02T14:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.878339 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.878393 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.878405 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.878427 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.878441 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:25Z","lastTransitionTime":"2025-12-02T14:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.981108 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.981600 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.981617 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.981636 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:25 crc kubenswrapper[4814]: I1202 14:29:25.981667 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:25Z","lastTransitionTime":"2025-12-02T14:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.084397 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.084449 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.084459 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.084479 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.084490 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:26Z","lastTransitionTime":"2025-12-02T14:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.144494 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.144604 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.144684 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:26 crc kubenswrapper[4814]: E1202 14:29:26.144831 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:29:26 crc kubenswrapper[4814]: E1202 14:29:26.144944 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:29:26 crc kubenswrapper[4814]: E1202 14:29:26.145108 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.187320 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.187371 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.187384 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.187403 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.187418 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:26Z","lastTransitionTime":"2025-12-02T14:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.291204 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.291291 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.291306 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.291329 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.291344 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:26Z","lastTransitionTime":"2025-12-02T14:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.351947 4814 generic.go:334] "Generic (PLEG): container finished" podID="3acbdd1e-1a6f-407f-be68-b570b5fde941" containerID="17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019" exitCode=0 Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.352030 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" event={"ID":"3acbdd1e-1a6f-407f-be68-b570b5fde941","Type":"ContainerDied","Data":"17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019"} Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.354729 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bg695" event={"ID":"a66bac68-7ada-40a6-8407-54161c4ff527","Type":"ContainerStarted","Data":"0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd"} Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.354753 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bg695" event={"ID":"a66bac68-7ada-40a6-8407-54161c4ff527","Type":"ContainerStarted","Data":"7dac946077f3f26581372e60a49ed75fd0c27d1caa8c02dc63419cf2a2f82202"} Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.367536 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.382193 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d56
5e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.395542 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.395604 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.395616 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.395637 4814 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.395650 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:26Z","lastTransitionTime":"2025-12-02T14:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.398457 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.416709 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.435075 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.451382 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.470626 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.491778 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.498496 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.498537 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.498558 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.498585 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.498598 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:26Z","lastTransitionTime":"2025-12-02T14:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.506642 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.526934 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.543170 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.567622 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.582448 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.595838 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.600892 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.600921 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.600931 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.600961 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.600974 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:26Z","lastTransitionTime":"2025-12-02T14:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.609575 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.630410 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.647671 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd4360474
41229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.660255 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.675548 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.689886 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.704496 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:26 crc 
kubenswrapper[4814]: I1202 14:29:26.704540 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.704561 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.704583 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.704597 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:26Z","lastTransitionTime":"2025-12-02T14:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.707967 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.708000 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.708012 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.708030 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.708044 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:26Z","lastTransitionTime":"2025-12-02T14:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.710939 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: E1202 14:29:26.722296 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.724919 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.727905 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.727961 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.727974 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.727992 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.728002 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:26Z","lastTransitionTime":"2025-12-02T14:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:26 crc kubenswrapper[4814]: E1202 14:29:26.743080 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.748215 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.748342 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.748355 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.748374 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.748390 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:26Z","lastTransitionTime":"2025-12-02T14:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.751143 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: E1202 14:29:26.764446 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.765469 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.768341 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.768365 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.768374 4814 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.768390 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.768403 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:26Z","lastTransitionTime":"2025-12-02T14:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.781493 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: E1202 14:29:26.781964 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.787374 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.787422 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.787431 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.787450 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.787465 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:26Z","lastTransitionTime":"2025-12-02T14:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.803061 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: E1202 14:29:26.803269 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: E1202 14:29:26.803382 4814 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.807495 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.807537 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.807563 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.807583 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.807666 4814 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:26Z","lastTransitionTime":"2025-12-02T14:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.827508 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},
{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.840869 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.852117 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.866613 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\
\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:26Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.913136 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.913198 4814 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.913211 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.913247 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:26 crc kubenswrapper[4814]: I1202 14:29:26.913266 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:26Z","lastTransitionTime":"2025-12-02T14:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.016288 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.016340 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.016353 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.016374 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.016387 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:27Z","lastTransitionTime":"2025-12-02T14:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.118553 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.118595 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.118605 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.118624 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.118635 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:27Z","lastTransitionTime":"2025-12-02T14:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.222312 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.222364 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.222379 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.222399 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.222414 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:27Z","lastTransitionTime":"2025-12-02T14:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.326296 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.326506 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.326530 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.326556 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.326575 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:27Z","lastTransitionTime":"2025-12-02T14:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.371541 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerStarted","Data":"2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374"} Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.375858 4814 generic.go:334] "Generic (PLEG): container finished" podID="3acbdd1e-1a6f-407f-be68-b570b5fde941" containerID="30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc" exitCode=0 Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.375921 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" event={"ID":"3acbdd1e-1a6f-407f-be68-b570b5fde941","Type":"ContainerDied","Data":"30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc"} Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.407202 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:27Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.428871 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:27Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.431393 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:27 crc 
kubenswrapper[4814]: I1202 14:29:27.431640 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.431673 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.431697 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.431711 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:27Z","lastTransitionTime":"2025-12-02T14:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.457688 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:27Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.475393 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd4360474
41229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:27Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.494671 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:27Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.508040 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:27Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.535388 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.535437 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.535451 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.535474 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.535490 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:27Z","lastTransitionTime":"2025-12-02T14:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.536090 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:27Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.554144 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:27Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.566962 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:27Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.585643 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:27Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.603921 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:27Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.621578 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:27Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.638456 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.638806 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.638871 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.638939 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.639004 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:27Z","lastTransitionTime":"2025-12-02T14:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.652582 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:27Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.666934 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:27Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.682998 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:27Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.741756 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.741847 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.741865 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.741893 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.741916 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:27Z","lastTransitionTime":"2025-12-02T14:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.844993 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.845040 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.845050 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.845068 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.845080 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:27Z","lastTransitionTime":"2025-12-02T14:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.947907 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.947941 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.947950 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.947964 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:27 crc kubenswrapper[4814]: I1202 14:29:27.947974 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:27Z","lastTransitionTime":"2025-12-02T14:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.017971 4814 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.050744 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.050794 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.050807 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.050827 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.050842 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:28Z","lastTransitionTime":"2025-12-02T14:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.080392 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.095005 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.108212 4814 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"
hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.121304 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.141098 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.150889 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:28 crc kubenswrapper[4814]: E1202 14:29:28.151043 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.151612 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:28 crc kubenswrapper[4814]: E1202 14:29:28.151701 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.151787 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:28 crc kubenswrapper[4814]: E1202 14:29:28.151870 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.154868 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.154917 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.154933 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.154956 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.154975 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:28Z","lastTransitionTime":"2025-12-02T14:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.171075 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.184542 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74
a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.195270 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.212680 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.226501 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.246580 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.257704 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.257748 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.257761 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.257782 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.257795 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:28Z","lastTransitionTime":"2025-12-02T14:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.259281 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67c
c63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.271864 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.284429 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.296673 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.309673 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d56
5e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.325146 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74
a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.340085 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.360822 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.360871 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.360883 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.360904 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.360917 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:28Z","lastTransitionTime":"2025-12-02T14:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.368854 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.383569 4814 generic.go:334] "Generic (PLEG): container finished" podID="3acbdd1e-1a6f-407f-be68-b570b5fde941" containerID="7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878" exitCode=0 Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.383619 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" event={"ID":"3acbdd1e-1a6f-407f-be68-b570b5fde941","Type":"ContainerDied","Data":"7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878"} Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.391841 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.407133 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.421675 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.438438 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.455444 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.463723 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.463782 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.463798 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.463820 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.463836 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:28Z","lastTransitionTime":"2025-12-02T14:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.470435 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.488007 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d56
5e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.499766 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.514936 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.530811 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.549362 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.567269 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.567725 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.567740 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.568608 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.568636 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:28Z","lastTransitionTime":"2025-12-02T14:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.584351 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.613903 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.641958 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.673203 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:28 crc 
kubenswrapper[4814]: I1202 14:29:28.673283 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.673296 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.673318 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.673330 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:28Z","lastTransitionTime":"2025-12-02T14:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.776038 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.776429 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.776529 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.776649 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.776739 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:28Z","lastTransitionTime":"2025-12-02T14:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.879915 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.879971 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.879983 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.880001 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.880014 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:28Z","lastTransitionTime":"2025-12-02T14:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.942979 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.960733 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd4360474
41229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.973355 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.983737 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.983798 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.983815 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.983835 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.983847 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:28Z","lastTransitionTime":"2025-12-02T14:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:28 crc kubenswrapper[4814]: I1202 14:29:28.990614 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.003186 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.024528 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.046649 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.066179 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.083779 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.088307 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.088335 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.088348 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.088368 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.088383 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:29Z","lastTransitionTime":"2025-12-02T14:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.097832 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.109755 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.121200 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.136022 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d56
5e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.191818 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.191859 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.191870 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.191889 4814 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.191903 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:29Z","lastTransitionTime":"2025-12-02T14:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.296314 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.296351 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.296360 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.296376 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.296387 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:29Z","lastTransitionTime":"2025-12-02T14:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.391830 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerStarted","Data":"d93fde70afb9f4a0d24be90263753e526b60e5e0b9b7437717457f7135561580"} Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.392905 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.392936 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.392996 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.398277 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.398304 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" event={"ID":"3acbdd1e-1a6f-407f-be68-b570b5fde941","Type":"ContainerStarted","Data":"5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef"} Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.398345 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.398531 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.398559 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.398575 4814 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:29Z","lastTransitionTime":"2025-12-02T14:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.411901 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter
-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.421027 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.421450 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.430555 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.445581 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.456561 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd4360474
41229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.468519 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.485216 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.497783 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.501099 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:29 crc 
kubenswrapper[4814]: I1202 14:29:29.501170 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.501183 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.501207 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.501223 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:29Z","lastTransitionTime":"2025-12-02T14:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.516175 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74
a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.526686 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.551637 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93fde70afb9f4a0d24be90263753e526b60e5e0b9b7437717457f7135561580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.568454 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.584787 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.604543 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.604606 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.604624 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.604650 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.604672 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:29Z","lastTransitionTime":"2025-12-02T14:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.612901 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.629866 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.644162 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.659295 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.674040 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\
\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.694467 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af2
95457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.704892 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: 
I1202 14:29:29.710518 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.710566 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.710584 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.710604 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.710616 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:29Z","lastTransitionTime":"2025-12-02T14:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.722016 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.736563 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.753688 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.770741 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:
15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2
f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.793058 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.814543 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.814597 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.814606 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.814625 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.814637 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:29Z","lastTransitionTime":"2025-12-02T14:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.842895 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93fde70afb9f4a0d24be90263753e526b60e5e0b9b7437717457f7135561580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.875719 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.917189 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.918206 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.918253 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.918264 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.918282 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.918295 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:29Z","lastTransitionTime":"2025-12-02T14:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.962577 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:29 crc kubenswrapper[4814]: I1202 14:29:29.993403 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:29Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.022380 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.022432 
4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.022444 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.022463 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.022474 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:30Z","lastTransitionTime":"2025-12-02T14:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.034306 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:30Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.125120 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.125168 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.125176 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.125191 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.125201 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:30Z","lastTransitionTime":"2025-12-02T14:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.144823 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.144932 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:30 crc kubenswrapper[4814]: E1202 14:29:30.145029 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:29:30 crc kubenswrapper[4814]: E1202 14:29:30.145139 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.145278 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:30 crc kubenswrapper[4814]: E1202 14:29:30.145346 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.228461 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.228551 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.228576 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.228610 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.228635 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:30Z","lastTransitionTime":"2025-12-02T14:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.331748 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.331817 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.331828 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.331848 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.331862 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:30Z","lastTransitionTime":"2025-12-02T14:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.434862 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.434902 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.434912 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.434931 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.434940 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:30Z","lastTransitionTime":"2025-12-02T14:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.555249 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.555287 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.555297 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.555314 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.555343 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:30Z","lastTransitionTime":"2025-12-02T14:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.657717 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.657767 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.657777 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.657795 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.657809 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:30Z","lastTransitionTime":"2025-12-02T14:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.760592 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.760655 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.760670 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.760691 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.760701 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:30Z","lastTransitionTime":"2025-12-02T14:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.863406 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.863460 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.863475 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.863499 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.863512 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:30Z","lastTransitionTime":"2025-12-02T14:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.966195 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.966289 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.966312 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.966342 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:30 crc kubenswrapper[4814]: I1202 14:29:30.966360 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:30Z","lastTransitionTime":"2025-12-02T14:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.069867 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.069923 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.069939 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.069965 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.069984 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:31Z","lastTransitionTime":"2025-12-02T14:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.173706 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.173770 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.173790 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.173815 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.173829 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:31Z","lastTransitionTime":"2025-12-02T14:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.278091 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.282479 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.282520 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.282555 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.282582 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:31Z","lastTransitionTime":"2025-12-02T14:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.385415 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.385479 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.385494 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.385518 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.385565 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:31Z","lastTransitionTime":"2025-12-02T14:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.407380 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gnpv5_d5cd5566-9b36-421b-8f02-f3d12db9b829/ovnkube-controller/0.log" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.411348 4814 generic.go:334] "Generic (PLEG): container finished" podID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerID="d93fde70afb9f4a0d24be90263753e526b60e5e0b9b7437717457f7135561580" exitCode=1 Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.411422 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerDied","Data":"d93fde70afb9f4a0d24be90263753e526b60e5e0b9b7437717457f7135561580"} Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.412354 4814 scope.go:117] "RemoveContainer" containerID="d93fde70afb9f4a0d24be90263753e526b60e5e0b9b7437717457f7135561580" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.430102 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74
a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:31Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.444058 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:31Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.467792 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93fde70afb9f4a0d24be90263753e526b60e5e0b9b7437717457f7135561580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d93fde70afb9f4a0d24be90263753e526b60e5e0b9b7437717457f7135561580\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:31Z\\\",\\\"message\\\":\\\"dler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 14:29:31.174486 6101 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 14:29:31.174554 6101 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 14:29:31.174745 6101 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 14:29:31.174800 6101 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 14:29:31.174819 6101 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 14:29:31.174827 6101 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 14:29:31.174836 6101 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 14:29:31.174850 6101 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 14:29:31.174858 6101 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 14:29:31.174865 6101 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 14:29:31.174873 6101 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 14:29:31.175166 6101 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 14:29:31.175868 6101 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:31Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.489426 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.489490 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.489506 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.489526 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.489537 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:31Z","lastTransitionTime":"2025-12-02T14:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.490113 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\
",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600f
ca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:31Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.504586 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:31Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.520606 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:31Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.534165 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:31Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.552645 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:31Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.566646 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:29:31Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.583635 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d56
5e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:31Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.592581 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.592616 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.592625 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.592642 4814 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.592653 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:31Z","lastTransitionTime":"2025-12-02T14:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.613542 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:31Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.628049 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:31Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.644428 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca7586
0cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:
29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:31Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.655667 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:31Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:31 crc 
kubenswrapper[4814]: I1202 14:29:31.665690 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6
hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:31Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.694432 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.694467 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.694481 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.694501 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.694513 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:31Z","lastTransitionTime":"2025-12-02T14:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.797984 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.798049 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.798063 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.798087 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.798106 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:31Z","lastTransitionTime":"2025-12-02T14:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.900695 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.900755 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.900770 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.900791 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.900806 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:31Z","lastTransitionTime":"2025-12-02T14:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:31 crc kubenswrapper[4814]: I1202 14:29:31.986487 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:29:31 crc kubenswrapper[4814]: E1202 14:29:31.986730 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 14:29:47.986693243 +0000 UTC m=+50.111055507 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.003273 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.003324 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.003334 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.003355 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.003370 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:32Z","lastTransitionTime":"2025-12-02T14:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.087662 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.087707 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.087740 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.087761 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:32 crc kubenswrapper[4814]: E1202 14:29:32.087836 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Dec 02 14:29:32 crc kubenswrapper[4814]: E1202 14:29:32.087852 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 14:29:32 crc kubenswrapper[4814]: E1202 14:29:32.087864 4814 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:32 crc kubenswrapper[4814]: E1202 14:29:32.087910 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:48.087895249 +0000 UTC m=+50.212257453 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:32 crc kubenswrapper[4814]: E1202 14:29:32.087836 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 14:29:32 crc kubenswrapper[4814]: E1202 14:29:32.087929 4814 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 14:29:32 crc kubenswrapper[4814]: E1202 14:29:32.087960 4814 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 14:29:32 crc kubenswrapper[4814]: E1202 14:29:32.087947 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 14:29:32 crc kubenswrapper[4814]: E1202 14:29:32.088034 4814 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:32 crc kubenswrapper[4814]: E1202 14:29:32.088046 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 
nodeName:}" failed. No retries permitted until 2025-12-02 14:29:48.088022992 +0000 UTC m=+50.212385196 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 14:29:32 crc kubenswrapper[4814]: E1202 14:29:32.088062 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:48.088055903 +0000 UTC m=+50.212418107 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 14:29:32 crc kubenswrapper[4814]: E1202 14:29:32.088084 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:48.088070883 +0000 UTC m=+50.212433267 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.105745 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.105785 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.105796 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.105810 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.105819 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:32Z","lastTransitionTime":"2025-12-02T14:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.143990 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.144074 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.144039 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:32 crc kubenswrapper[4814]: E1202 14:29:32.144183 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:29:32 crc kubenswrapper[4814]: E1202 14:29:32.144437 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:29:32 crc kubenswrapper[4814]: E1202 14:29:32.144672 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.208835 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.208883 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.208895 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.208914 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.208923 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:32Z","lastTransitionTime":"2025-12-02T14:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.311689 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.311744 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.311756 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.311781 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.311796 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:32Z","lastTransitionTime":"2025-12-02T14:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.414657 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.414712 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.414721 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.414740 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.414750 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:32Z","lastTransitionTime":"2025-12-02T14:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.417753 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gnpv5_d5cd5566-9b36-421b-8f02-f3d12db9b829/ovnkube-controller/0.log" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.421341 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerStarted","Data":"abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839"} Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.421782 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.438706 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55
b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:32Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.453049 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:32Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.468117 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af2
95457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:32Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.479593 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:32Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:32 crc kubenswrapper[4814]: 
I1202 14:29:32.490208 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:32Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.502498 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:32Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.517040 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:32Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.517137 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:32 crc 
kubenswrapper[4814]: I1202 14:29:32.517184 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.517200 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.517224 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.517258 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:32Z","lastTransitionTime":"2025-12-02T14:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.533073 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74
a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:32Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.545124 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:32Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.575693 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d93fde70afb9f4a0d24be90263753e526b60e5e0b9b7437717457f7135561580\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:31Z\\\",\\\"message\\\":\\\"dler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 14:29:31.174486 6101 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 14:29:31.174554 6101 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 14:29:31.174745 6101 
reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 14:29:31.174800 6101 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 14:29:31.174819 6101 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 14:29:31.174827 6101 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 14:29:31.174836 6101 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 14:29:31.174850 6101 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 14:29:31.174858 6101 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 14:29:31.174865 6101 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 14:29:31.174873 6101 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 14:29:31.175166 6101 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 14:29:31.175868 6101 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:32Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.590772 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:32Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.604970 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:32Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.620139 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.620177 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.620190 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.620206 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.620216 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:32Z","lastTransitionTime":"2025-12-02T14:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.633398 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:32Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.649888 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:32Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.667887 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:32Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.722586 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.722645 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.722661 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.722685 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.722704 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:32Z","lastTransitionTime":"2025-12-02T14:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.826039 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.826080 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.826091 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.826110 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.826120 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:32Z","lastTransitionTime":"2025-12-02T14:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.929042 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.929086 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.929098 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.929114 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:32 crc kubenswrapper[4814]: I1202 14:29:32.929124 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:32Z","lastTransitionTime":"2025-12-02T14:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.032470 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.032525 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.032542 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.032561 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.032575 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:33Z","lastTransitionTime":"2025-12-02T14:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.135710 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.135748 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.135761 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.135778 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.135794 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:33Z","lastTransitionTime":"2025-12-02T14:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.239513 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.239587 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.239601 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.239624 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.239638 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:33Z","lastTransitionTime":"2025-12-02T14:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.342761 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.342804 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.342814 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.342829 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.342837 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:33Z","lastTransitionTime":"2025-12-02T14:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.426412 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gnpv5_d5cd5566-9b36-421b-8f02-f3d12db9b829/ovnkube-controller/1.log" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.426939 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gnpv5_d5cd5566-9b36-421b-8f02-f3d12db9b829/ovnkube-controller/0.log" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.429427 4814 generic.go:334] "Generic (PLEG): container finished" podID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerID="abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839" exitCode=1 Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.429456 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerDied","Data":"abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839"} Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.429507 4814 scope.go:117] "RemoveContainer" containerID="d93fde70afb9f4a0d24be90263753e526b60e5e0b9b7437717457f7135561580" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.430027 4814 scope.go:117] "RemoveContainer" containerID="abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839" Dec 02 14:29:33 crc kubenswrapper[4814]: E1202 14:29:33.430181 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.443348 4814 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd67
7ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered 
and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:33Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.445196 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.445252 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.445267 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.445289 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.445301 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:33Z","lastTransitionTime":"2025-12-02T14:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.454107 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:33Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.486281 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d93fde70afb9f4a0d24be90263753e526b60e5e0b9b7437717457f7135561580\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:31Z\\\",\\\"message\\\":\\\"dler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 14:29:31.174486 6101 
reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 14:29:31.174554 6101 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 14:29:31.174745 6101 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 14:29:31.174800 6101 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 14:29:31.174819 6101 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 14:29:31.174827 6101 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 14:29:31.174836 6101 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 14:29:31.174850 6101 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 14:29:31.174858 6101 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 14:29:31.174865 6101 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 14:29:31.174873 6101 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 14:29:31.175166 6101 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 14:29:31.175868 6101 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:32Z\\\",\\\"message\\\":\\\"dbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI1202 14:29:32.427077 6233 services_controller.go:360] Finished syncing service api on namespace 
openshift-apiserver for network=default : 1.737513ms\\\\nI1202 14:29:32.427047 6233 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI1202 14:29:32.427093 6233 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}\\\\nI1202 14:29:32.427101 6233 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 1.619351ms\\\\nI1202 14:29:32.427105 6233 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace for network=default : 1.942089ms\\\\nI1202 14:29:32.427146 6233 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1202 14:29:32.427188 6233 ovnkube.go:599] Stopped ovnkube\\\\nI1202 14:29:32.427219 6233 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 14:29:32.427305 6233 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf87
64094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:33Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.510128 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:33Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.525654 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:33Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.539970 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:33Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.548132 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.548210 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.548265 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.548305 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.548350 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:33Z","lastTransitionTime":"2025-12-02T14:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.555920 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:33Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.570187 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:33Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.587080 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:29:33Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.599306 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d56
5e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:33Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.614035 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:33Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.624467 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:33Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.640989 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca7586
0cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:
29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:33Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.651624 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.651672 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.651683 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.651701 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.651715 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:33Z","lastTransitionTime":"2025-12-02T14:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.654993 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:33Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.666838 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:33Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.754472 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.754527 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.754540 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.754563 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.754578 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:33Z","lastTransitionTime":"2025-12-02T14:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.856990 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.857037 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.857049 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.857081 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.857094 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:33Z","lastTransitionTime":"2025-12-02T14:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.959475 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.959531 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.959541 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.959555 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:33 crc kubenswrapper[4814]: I1202 14:29:33.959565 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:33Z","lastTransitionTime":"2025-12-02T14:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.062462 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.062504 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.062513 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.062529 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.062539 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:34Z","lastTransitionTime":"2025-12-02T14:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.143582 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.143688 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:34 crc kubenswrapper[4814]: E1202 14:29:34.143748 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:29:34 crc kubenswrapper[4814]: E1202 14:29:34.143839 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.143613 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:34 crc kubenswrapper[4814]: E1202 14:29:34.143932 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.165016 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.165060 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.165072 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.165089 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.165099 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:34Z","lastTransitionTime":"2025-12-02T14:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.267317 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.267373 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.267382 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.267400 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.267412 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:34Z","lastTransitionTime":"2025-12-02T14:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.370150 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.370184 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.370192 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.370206 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.370215 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:34Z","lastTransitionTime":"2025-12-02T14:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.434598 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gnpv5_d5cd5566-9b36-421b-8f02-f3d12db9b829/ovnkube-controller/1.log" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.438156 4814 scope.go:117] "RemoveContainer" containerID="abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839" Dec 02 14:29:34 crc kubenswrapper[4814]: E1202 14:29:34.438360 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.455331 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.471678 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.472622 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.472685 
4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.472698 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.472723 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.472736 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:34Z","lastTransitionTime":"2025-12-02T14:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.486125 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.503175 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.521117 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.553355 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.575098 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.575150 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.575165 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.575187 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.575203 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:34Z","lastTransitionTime":"2025-12-02T14:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.589941 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.607868 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.618118 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz"] Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.618836 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.620934 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.621333 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.624347 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0d
d97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.
168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.640622 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413
bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c44
9f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\
",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.657377 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd4360474
41229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.671642 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.677677 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.677751 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.677767 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.677787 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.677804 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:34Z","lastTransitionTime":"2025-12-02T14:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.686887 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.699516 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.710787 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nhbnz\" (UID: \"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.711073 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nhbnz\" (UID: \"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.711205 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t5kh\" (UniqueName: \"kubernetes.io/projected/c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2-kube-api-access-4t5kh\") pod \"ovnkube-control-plane-749d76644c-nhbnz\" (UID: \"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.711577 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nhbnz\" (UID: \"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.720718 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:32Z\\\",\\\"message\\\":\\\"dbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI1202 14:29:32.427077 6233 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 
1.737513ms\\\\nI1202 14:29:32.427047 6233 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI1202 14:29:32.427093 6233 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}\\\\nI1202 14:29:32.427101 6233 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 1.619351ms\\\\nI1202 14:29:32.427105 6233 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace for network=default : 1.942089ms\\\\nI1202 14:29:32.427146 6233 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1202 14:29:32.427188 6233 ovnkube.go:599] Stopped ovnkube\\\\nI1202 14:29:32.427219 6233 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 14:29:32.427305 6233 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd
3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.735740 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.750433 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d56
5e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.766321 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.781084 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.781138 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.781150 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.781167 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.781179 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:34Z","lastTransitionTime":"2025-12-02T14:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.782194 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.806603 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.812367 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nhbnz\" (UID: \"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.812412 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nhbnz\" (UID: \"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.812432 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t5kh\" (UniqueName: \"kubernetes.io/projected/c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2-kube-api-access-4t5kh\") pod \"ovnkube-control-plane-749d76644c-nhbnz\" (UID: \"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.812459 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nhbnz\" (UID: \"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.813421 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nhbnz\" (UID: \"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.813444 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nhbnz\" (UID: \"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.819068 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nhbnz\" (UID: \"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.823032 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd4360474
41229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.834563 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.840874 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t5kh\" (UniqueName: \"kubernetes.io/projected/c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2-kube-api-access-4t5kh\") pod \"ovnkube-control-plane-749d76644c-nhbnz\" (UID: \"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.847712 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.868137 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:32Z\\\",\\\"message\\\":\\\"dbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI1202 14:29:32.427077 6233 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 
1.737513ms\\\\nI1202 14:29:32.427047 6233 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI1202 14:29:32.427093 6233 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}\\\\nI1202 14:29:32.427101 6233 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 1.619351ms\\\\nI1202 14:29:32.427105 6233 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace for network=default : 1.942089ms\\\\nI1202 14:29:32.427146 6233 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1202 14:29:32.427188 6233 ovnkube.go:599] Stopped ovnkube\\\\nI1202 14:29:32.427219 6233 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 14:29:32.427305 6233 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd
3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.883659 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74
a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.883842 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.883907 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.883921 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.883943 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.884356 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:34Z","lastTransitionTime":"2025-12-02T14:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.902466 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67c
c63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.920802 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.931488 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.943941 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: W1202 14:29:34.949759 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc26f7d42_5c37_4ae2_a6a4_c5d7bda572e2.slice/crio-70d378b909bc6159a211bb95972d2ad8705e288053ffc3f0cac405d8bf033481 WatchSource:0}: Error finding container 70d378b909bc6159a211bb95972d2ad8705e288053ffc3f0cac405d8bf033481: Status 404 returned error can't find the container with id 70d378b909bc6159a211bb95972d2ad8705e288053ffc3f0cac405d8bf033481 Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.962046 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.981408 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhbnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:34Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.990657 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.990708 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.990726 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.990754 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:34 crc kubenswrapper[4814]: I1202 14:29:34.990778 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:34Z","lastTransitionTime":"2025-12-02T14:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.006616 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:35Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.094535 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.094594 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.094608 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.094637 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.094654 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:35Z","lastTransitionTime":"2025-12-02T14:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.198504 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.198546 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.198559 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.198575 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.198586 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:35Z","lastTransitionTime":"2025-12-02T14:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.301458 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.301519 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.301528 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.301563 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.301577 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:35Z","lastTransitionTime":"2025-12-02T14:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.404563 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.404611 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.404621 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.404639 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.404648 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:35Z","lastTransitionTime":"2025-12-02T14:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.442440 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" event={"ID":"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2","Type":"ContainerStarted","Data":"70d378b909bc6159a211bb95972d2ad8705e288053ffc3f0cac405d8bf033481"} Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.507553 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.507595 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.507607 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.507631 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.507647 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:35Z","lastTransitionTime":"2025-12-02T14:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.611156 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.611200 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.611209 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.611246 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.611258 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:35Z","lastTransitionTime":"2025-12-02T14:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.714916 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.715006 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.715027 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.715055 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.715073 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:35Z","lastTransitionTime":"2025-12-02T14:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.820003 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.820066 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.820083 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.820108 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.820125 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:35Z","lastTransitionTime":"2025-12-02T14:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.922640 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.922998 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.923016 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.923033 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:35 crc kubenswrapper[4814]: I1202 14:29:35.923044 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:35Z","lastTransitionTime":"2025-12-02T14:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.025782 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.025833 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.025843 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.025861 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.025871 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:36Z","lastTransitionTime":"2025-12-02T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.124556 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-5n4wm"] Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.125372 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:29:36 crc kubenswrapper[4814]: E1202 14:29:36.125474 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.129402 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.129436 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.129445 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.129459 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.129471 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:36Z","lastTransitionTime":"2025-12-02T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.142302 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.144539 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.144601 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:36 crc kubenswrapper[4814]: E1202 14:29:36.144720 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.144735 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:36 crc kubenswrapper[4814]: E1202 14:29:36.144920 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:29:36 crc kubenswrapper[4814]: E1202 14:29:36.145042 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.160890 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"sta
te\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.170894 4814 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.187113 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.202693 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.221443 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca7586
0cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:
29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.225633 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs\") pod \"network-metrics-daemon-5n4wm\" (UID: \"2ad0a031-3e26-4596-a7a9-cd00a95b7650\") " pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.225712 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlc4z\" (UniqueName: \"kubernetes.io/projected/2ad0a031-3e26-4596-a7a9-cd00a95b7650-kube-api-access-zlc4z\") pod \"network-metrics-daemon-5n4wm\" (UID: \"2ad0a031-3e26-4596-a7a9-cd00a95b7650\") " pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.232380 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.232414 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.232426 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.232445 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.232458 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:36Z","lastTransitionTime":"2025-12-02T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.236117 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.252347 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.264460 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.286414 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:32Z\\\",\\\"message\\\":\\\"dbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI1202 14:29:32.427077 6233 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 
1.737513ms\\\\nI1202 14:29:32.427047 6233 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI1202 14:29:32.427093 6233 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}\\\\nI1202 14:29:32.427101 6233 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 1.619351ms\\\\nI1202 14:29:32.427105 6233 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace for network=default : 1.942089ms\\\\nI1202 14:29:32.427146 6233 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1202 14:29:32.427188 6233 ovnkube.go:599] Stopped ovnkube\\\\nI1202 14:29:32.427219 6233 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 14:29:32.427305 6233 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd
3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.303155 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5n4wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad0a031-3e26-4596-a7a9-cd00a95b7650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5n4wm\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.316850 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhbnz\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.326765 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs\") pod \"network-metrics-daemon-5n4wm\" (UID: \"2ad0a031-3e26-4596-a7a9-cd00a95b7650\") " pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.326891 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlc4z\" (UniqueName: \"kubernetes.io/projected/2ad0a031-3e26-4596-a7a9-cd00a95b7650-kube-api-access-zlc4z\") pod \"network-metrics-daemon-5n4wm\" (UID: \"2ad0a031-3e26-4596-a7a9-cd00a95b7650\") " pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:29:36 crc kubenswrapper[4814]: E1202 14:29:36.326998 4814 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 14:29:36 crc kubenswrapper[4814]: E1202 14:29:36.327107 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs podName:2ad0a031-3e26-4596-a7a9-cd00a95b7650 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:36.827080833 +0000 UTC m=+38.951443037 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs") pod "network-metrics-daemon-5n4wm" (UID: "2ad0a031-3e26-4596-a7a9-cd00a95b7650") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.335313 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.335371 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.335385 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.335407 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.335420 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:36Z","lastTransitionTime":"2025-12-02T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.340793 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.346312 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlc4z\" (UniqueName: \"kubernetes.io/projected/2ad0a031-3e26-4596-a7a9-cd00a95b7650-kube-api-access-zlc4z\") pod \"network-metrics-daemon-5n4wm\" (UID: \"2ad0a031-3e26-4596-a7a9-cd00a95b7650\") " pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.355277 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.371315 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.388318 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.406365 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.437779 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.437816 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.437827 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.437842 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.437851 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:36Z","lastTransitionTime":"2025-12-02T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.447625 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" event={"ID":"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2","Type":"ContainerStarted","Data":"1b1cd5f3b148daf6c46795d31f3f5df03b10861ca23111173fd83723df500849"} Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.447692 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" event={"ID":"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2","Type":"ContainerStarted","Data":"c51c3a6690625e0485f495b314d6a6f650f503a120badabf80418b47e77d5bea"} Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.462874 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.480646 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d56
5e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.498441 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.514322 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.529976 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca7586
0cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:
29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.541497 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.541555 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.541571 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.541594 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.541603 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:36Z","lastTransitionTime":"2025-12-02T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.542221 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.555883 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.571926 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.594942 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:32Z\\\",\\\"message\\\":\\\"dbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI1202 14:29:32.427077 6233 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 
1.737513ms\\\\nI1202 14:29:32.427047 6233 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI1202 14:29:32.427093 6233 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}\\\\nI1202 14:29:32.427101 6233 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 1.619351ms\\\\nI1202 14:29:32.427105 6233 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace for network=default : 1.942089ms\\\\nI1202 14:29:32.427146 6233 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1202 14:29:32.427188 6233 ovnkube.go:599] Stopped ovnkube\\\\nI1202 14:29:32.427219 6233 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 14:29:32.427305 6233 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd
3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.608740 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5n4wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad0a031-3e26-4596-a7a9-cd00a95b7650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5n4wm\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.626173 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bc
c6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.642849 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.643987 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.644028 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.644060 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.644079 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.644089 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:36Z","lastTransitionTime":"2025-12-02T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.656356 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.670577 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.681274 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.691438 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c51c3a6690625e0485f495b314d6a6f650f503a120badabf80418b47e77d5bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1cd5f3b148daf6c46795d31f3f5df03b108
61ca23111173fd83723df500849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhbnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.714856 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.746996 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.747037 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.747049 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.747066 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.747077 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:36Z","lastTransitionTime":"2025-12-02T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.832483 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs\") pod \"network-metrics-daemon-5n4wm\" (UID: \"2ad0a031-3e26-4596-a7a9-cd00a95b7650\") " pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:29:36 crc kubenswrapper[4814]: E1202 14:29:36.832638 4814 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 14:29:36 crc kubenswrapper[4814]: E1202 14:29:36.832704 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs podName:2ad0a031-3e26-4596-a7a9-cd00a95b7650 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:37.832685923 +0000 UTC m=+39.957048127 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs") pod "network-metrics-daemon-5n4wm" (UID: "2ad0a031-3e26-4596-a7a9-cd00a95b7650") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.849493 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.849538 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.849549 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.849606 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.849628 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:36Z","lastTransitionTime":"2025-12-02T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.850948 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.850996 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.851011 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.851031 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.851046 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:36Z","lastTransitionTime":"2025-12-02T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:36 crc kubenswrapper[4814]: E1202 14:29:36.863279 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.866880 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.866933 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.866947 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.866969 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.866981 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:36Z","lastTransitionTime":"2025-12-02T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.905151 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:36Z","lastTransitionTime":"2025-12-02T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:36 crc kubenswrapper[4814]: E1202 14:29:36.920752 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.925569 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.925622 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.925641 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.925670 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.925688 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:36Z","lastTransitionTime":"2025-12-02T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:36 crc kubenswrapper[4814]: E1202 14:29:36.942730 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:36Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:36 crc kubenswrapper[4814]: E1202 14:29:36.942936 4814 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.952710 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.952753 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.952763 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.952786 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:36 crc kubenswrapper[4814]: I1202 14:29:36.952799 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:36Z","lastTransitionTime":"2025-12-02T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.054954 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.055015 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.055034 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.055057 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.055071 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:37Z","lastTransitionTime":"2025-12-02T14:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.158684 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.158769 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.158789 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.158819 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.158841 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:37Z","lastTransitionTime":"2025-12-02T14:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.262088 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.262147 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.262168 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.262195 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.262211 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:37Z","lastTransitionTime":"2025-12-02T14:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.366314 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.366389 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.366405 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.366422 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.366433 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:37Z","lastTransitionTime":"2025-12-02T14:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.468348 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.468404 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.468425 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.468447 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.468466 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:37Z","lastTransitionTime":"2025-12-02T14:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.570976 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.571042 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.571056 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.571078 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.571093 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:37Z","lastTransitionTime":"2025-12-02T14:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.674145 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.674188 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.674200 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.674219 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.674257 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:37Z","lastTransitionTime":"2025-12-02T14:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.777680 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.777731 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.777741 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.777761 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.777773 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:37Z","lastTransitionTime":"2025-12-02T14:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.844695 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs\") pod \"network-metrics-daemon-5n4wm\" (UID: \"2ad0a031-3e26-4596-a7a9-cd00a95b7650\") " pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:29:37 crc kubenswrapper[4814]: E1202 14:29:37.844937 4814 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 14:29:37 crc kubenswrapper[4814]: E1202 14:29:37.845058 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs podName:2ad0a031-3e26-4596-a7a9-cd00a95b7650 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:39.845030511 +0000 UTC m=+41.969392875 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs") pod "network-metrics-daemon-5n4wm" (UID: "2ad0a031-3e26-4596-a7a9-cd00a95b7650") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.881124 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.881185 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.881199 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.881224 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.881263 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:37Z","lastTransitionTime":"2025-12-02T14:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.983756 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.983807 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.983820 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.983838 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:37 crc kubenswrapper[4814]: I1202 14:29:37.983851 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:37Z","lastTransitionTime":"2025-12-02T14:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.086263 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.086304 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.086313 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.086330 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.086341 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:38Z","lastTransitionTime":"2025-12-02T14:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.144736 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.144884 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:38 crc kubenswrapper[4814]: E1202 14:29:38.144940 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:29:38 crc kubenswrapper[4814]: E1202 14:29:38.145075 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.144843 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.144736 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:38 crc kubenswrapper[4814]: E1202 14:29:38.145674 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:29:38 crc kubenswrapper[4814]: E1202 14:29:38.145742 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.159931 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:38Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.174447 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:38Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.187858 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:38Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.189623 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.189677 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.189690 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.189710 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.189750 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:38Z","lastTransitionTime":"2025-12-02T14:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.200625 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:38Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.214596 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c51c3a6690625e0485f495b314d6a6f650f503a120badabf80418b47e77d5bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1cd5f3b148daf6c46795d31f3f5df03b108
61ca23111173fd83723df500849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhbnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:38Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.237827 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:38Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.253779 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:38Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.267836 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:38Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.282993 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:38Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.295681 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.295747 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:38 crc 
kubenswrapper[4814]: I1202 14:29:38.295759 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.295806 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.295818 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:38Z","lastTransitionTime":"2025-12-02T14:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.298752 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:38Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.316157 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca7586
0cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:
29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:38Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.329080 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:38Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:38 crc 
kubenswrapper[4814]: I1202 14:29:38.338243 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6
hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:38Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.347186 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:38Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.368511 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:32Z\\\",\\\"message\\\":\\\"dbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI1202 14:29:32.427077 6233 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 
1.737513ms\\\\nI1202 14:29:32.427047 6233 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI1202 14:29:32.427093 6233 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}\\\\nI1202 14:29:32.427101 6233 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 1.619351ms\\\\nI1202 14:29:32.427105 6233 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace for network=default : 1.942089ms\\\\nI1202 14:29:32.427146 6233 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1202 14:29:32.427188 6233 ovnkube.go:599] Stopped ovnkube\\\\nI1202 14:29:32.427219 6233 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 14:29:32.427305 6233 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd
3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:38Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.381511 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5n4wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad0a031-3e26-4596-a7a9-cd00a95b7650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5n4wm\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:38Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.395983 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bc
c6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:38Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.399109 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.399143 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.399156 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.399179 4814 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.399194 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:38Z","lastTransitionTime":"2025-12-02T14:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.502437 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.502485 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.502496 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.502512 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.502522 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:38Z","lastTransitionTime":"2025-12-02T14:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.605834 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.605885 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.605899 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.605921 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.605934 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:38Z","lastTransitionTime":"2025-12-02T14:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.709694 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.709743 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.709755 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.709771 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.709781 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:38Z","lastTransitionTime":"2025-12-02T14:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.812773 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.812849 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.812868 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.812895 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.812915 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:38Z","lastTransitionTime":"2025-12-02T14:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.915289 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.915334 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.915347 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.915365 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:38 crc kubenswrapper[4814]: I1202 14:29:38.915376 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:38Z","lastTransitionTime":"2025-12-02T14:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.017470 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.017513 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.017525 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.017545 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.017556 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:39Z","lastTransitionTime":"2025-12-02T14:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.120072 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.120152 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.120169 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.120188 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.120198 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:39Z","lastTransitionTime":"2025-12-02T14:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.222754 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.222797 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.222808 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.222823 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.222840 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:39Z","lastTransitionTime":"2025-12-02T14:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.325530 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.325596 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.325615 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.325641 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.325657 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:39Z","lastTransitionTime":"2025-12-02T14:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.428425 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.428472 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.428484 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.428501 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.428513 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:39Z","lastTransitionTime":"2025-12-02T14:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.531312 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.531356 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.531367 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.531385 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.531397 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:39Z","lastTransitionTime":"2025-12-02T14:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.633885 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.633919 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.633927 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.633943 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.633953 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:39Z","lastTransitionTime":"2025-12-02T14:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.736492 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.736539 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.736582 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.736604 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.736616 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:39Z","lastTransitionTime":"2025-12-02T14:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.839331 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.839380 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.839395 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.839415 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.839427 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:39Z","lastTransitionTime":"2025-12-02T14:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.865525 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs\") pod \"network-metrics-daemon-5n4wm\" (UID: \"2ad0a031-3e26-4596-a7a9-cd00a95b7650\") " pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:29:39 crc kubenswrapper[4814]: E1202 14:29:39.866038 4814 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 14:29:39 crc kubenswrapper[4814]: E1202 14:29:39.866302 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs podName:2ad0a031-3e26-4596-a7a9-cd00a95b7650 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:43.866273238 +0000 UTC m=+45.990635442 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs") pod "network-metrics-daemon-5n4wm" (UID: "2ad0a031-3e26-4596-a7a9-cd00a95b7650") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.943076 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.943165 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.943190 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.943272 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:39 crc kubenswrapper[4814]: I1202 14:29:39.943302 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:39Z","lastTransitionTime":"2025-12-02T14:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.046210 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.046284 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.046295 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.046314 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.046324 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:40Z","lastTransitionTime":"2025-12-02T14:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.144408 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.144549 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.144550 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:40 crc kubenswrapper[4814]: E1202 14:29:40.144762 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.144817 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:29:40 crc kubenswrapper[4814]: E1202 14:29:40.145098 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:29:40 crc kubenswrapper[4814]: E1202 14:29:40.145198 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:29:40 crc kubenswrapper[4814]: E1202 14:29:40.145305 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.151081 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.151131 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.151149 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.151172 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.151189 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:40Z","lastTransitionTime":"2025-12-02T14:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.254317 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.254366 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.254379 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.254398 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.254412 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:40Z","lastTransitionTime":"2025-12-02T14:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.357278 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.357363 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.357533 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.357572 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.357598 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:40Z","lastTransitionTime":"2025-12-02T14:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.461641 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.463456 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.463516 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.463749 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.463797 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:40Z","lastTransitionTime":"2025-12-02T14:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.567424 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.567479 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.567489 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.567508 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.567521 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:40Z","lastTransitionTime":"2025-12-02T14:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.671000 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.671081 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.671106 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.671144 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.671170 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:40Z","lastTransitionTime":"2025-12-02T14:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.773464 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.773507 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.773521 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.773541 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.773556 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:40Z","lastTransitionTime":"2025-12-02T14:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.876298 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.876371 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.876389 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.876412 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.876430 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:40Z","lastTransitionTime":"2025-12-02T14:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.979357 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.979429 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.979461 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.979495 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:40 crc kubenswrapper[4814]: I1202 14:29:40.979518 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:40Z","lastTransitionTime":"2025-12-02T14:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.082468 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.082511 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.082523 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.082542 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.082555 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:41Z","lastTransitionTime":"2025-12-02T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.184715 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.184751 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.184760 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.184774 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.184784 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:41Z","lastTransitionTime":"2025-12-02T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.287818 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.287924 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.287935 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.287954 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.287967 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:41Z","lastTransitionTime":"2025-12-02T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.391123 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.391178 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.391189 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.391207 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.391224 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:41Z","lastTransitionTime":"2025-12-02T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.493521 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.493574 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.493585 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.493602 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.493615 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:41Z","lastTransitionTime":"2025-12-02T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.596538 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.596586 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.596599 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.596615 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.596625 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:41Z","lastTransitionTime":"2025-12-02T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.698923 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.698971 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.698983 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.699003 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.699016 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:41Z","lastTransitionTime":"2025-12-02T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.801871 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.801934 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.801946 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.801965 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.801978 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:41Z","lastTransitionTime":"2025-12-02T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.904185 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.904254 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.904271 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.904290 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:41 crc kubenswrapper[4814]: I1202 14:29:41.904304 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:41Z","lastTransitionTime":"2025-12-02T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.007100 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.007152 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.007169 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.007189 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.007199 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:42Z","lastTransitionTime":"2025-12-02T14:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.110859 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.110915 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.110928 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.110949 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.110966 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:42Z","lastTransitionTime":"2025-12-02T14:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.143761 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.143868 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.143929 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.143890 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:42 crc kubenswrapper[4814]: E1202 14:29:42.144019 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:29:42 crc kubenswrapper[4814]: E1202 14:29:42.144103 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:29:42 crc kubenswrapper[4814]: E1202 14:29:42.144457 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:29:42 crc kubenswrapper[4814]: E1202 14:29:42.144626 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.214849 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.214907 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.214920 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.214959 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.214973 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:42Z","lastTransitionTime":"2025-12-02T14:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.318249 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.318307 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.318317 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.318334 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.318347 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:42Z","lastTransitionTime":"2025-12-02T14:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.421406 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.421460 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.421473 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.421495 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.421507 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:42Z","lastTransitionTime":"2025-12-02T14:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.523833 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.523884 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.523901 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.523923 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.523938 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:42Z","lastTransitionTime":"2025-12-02T14:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.627438 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.627511 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.627535 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.627565 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.627586 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:42Z","lastTransitionTime":"2025-12-02T14:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.730151 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.730207 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.730222 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.730261 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.730277 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:42Z","lastTransitionTime":"2025-12-02T14:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.833933 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.834012 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.834032 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.834061 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.834083 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:42Z","lastTransitionTime":"2025-12-02T14:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.937357 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.937403 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.937412 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.937429 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:42 crc kubenswrapper[4814]: I1202 14:29:42.937440 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:42Z","lastTransitionTime":"2025-12-02T14:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.041521 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.041566 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.041576 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.041592 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.041602 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:43Z","lastTransitionTime":"2025-12-02T14:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.148145 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.148214 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.148252 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.148281 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.148301 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:43Z","lastTransitionTime":"2025-12-02T14:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.251384 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.251444 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.251457 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.251481 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.251493 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:43Z","lastTransitionTime":"2025-12-02T14:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.354454 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.354496 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.354507 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.354524 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.354535 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:43Z","lastTransitionTime":"2025-12-02T14:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.457441 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.457483 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.457492 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.457511 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.457522 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:43Z","lastTransitionTime":"2025-12-02T14:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.559872 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.559917 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.559928 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.559948 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.559960 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:43Z","lastTransitionTime":"2025-12-02T14:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.663341 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.663390 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.663403 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.663424 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.663440 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:43Z","lastTransitionTime":"2025-12-02T14:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.766509 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.766842 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.766967 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.767045 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.767107 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:43Z","lastTransitionTime":"2025-12-02T14:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.870205 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.870977 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.871064 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.871140 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.871206 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:43Z","lastTransitionTime":"2025-12-02T14:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.914210 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs\") pod \"network-metrics-daemon-5n4wm\" (UID: \"2ad0a031-3e26-4596-a7a9-cd00a95b7650\") " pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:29:43 crc kubenswrapper[4814]: E1202 14:29:43.914449 4814 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 14:29:43 crc kubenswrapper[4814]: E1202 14:29:43.914572 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs podName:2ad0a031-3e26-4596-a7a9-cd00a95b7650 nodeName:}" failed. No retries permitted until 2025-12-02 14:29:51.914541309 +0000 UTC m=+54.038903673 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs") pod "network-metrics-daemon-5n4wm" (UID: "2ad0a031-3e26-4596-a7a9-cd00a95b7650") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.974028 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.974086 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.974103 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.974128 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:43 crc kubenswrapper[4814]: I1202 14:29:43.974141 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:43Z","lastTransitionTime":"2025-12-02T14:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.077397 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.077451 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.077464 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.077486 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.077499 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:44Z","lastTransitionTime":"2025-12-02T14:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.144426 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.144426 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.144686 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.145047 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:44 crc kubenswrapper[4814]: E1202 14:29:44.145196 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:29:44 crc kubenswrapper[4814]: E1202 14:29:44.145290 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:29:44 crc kubenswrapper[4814]: E1202 14:29:44.145530 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:29:44 crc kubenswrapper[4814]: E1202 14:29:44.145712 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.179744 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.179792 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.179803 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.179852 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.179872 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:44Z","lastTransitionTime":"2025-12-02T14:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.282513 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.282556 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.282565 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.282582 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.282592 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:44Z","lastTransitionTime":"2025-12-02T14:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.386623 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.386666 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.386675 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.386691 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.386700 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:44Z","lastTransitionTime":"2025-12-02T14:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.488986 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.489351 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.489470 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.489546 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.489613 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:44Z","lastTransitionTime":"2025-12-02T14:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.592388 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.592708 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.592770 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.592908 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.592987 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:44Z","lastTransitionTime":"2025-12-02T14:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.695700 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.695741 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.695751 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.695765 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.695774 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:44Z","lastTransitionTime":"2025-12-02T14:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.798125 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.798158 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.798167 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.798183 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.798193 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:44Z","lastTransitionTime":"2025-12-02T14:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.900777 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.900825 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.900835 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.900852 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:44 crc kubenswrapper[4814]: I1202 14:29:44.900861 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:44Z","lastTransitionTime":"2025-12-02T14:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.003185 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.003239 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.003249 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.003265 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.003275 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:45Z","lastTransitionTime":"2025-12-02T14:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.106195 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.106269 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.106281 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.106316 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.106329 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:45Z","lastTransitionTime":"2025-12-02T14:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.209104 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.209165 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.209183 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.209208 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.209220 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:45Z","lastTransitionTime":"2025-12-02T14:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.311861 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.311909 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.311919 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.311935 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.311945 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:45Z","lastTransitionTime":"2025-12-02T14:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.414338 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.414388 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.414402 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.414426 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.414442 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:45Z","lastTransitionTime":"2025-12-02T14:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.516837 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.517736 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.517767 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.517785 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.517796 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:45Z","lastTransitionTime":"2025-12-02T14:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.619732 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.619772 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.619783 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.619800 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.619812 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:45Z","lastTransitionTime":"2025-12-02T14:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.722619 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.722657 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.722665 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.722680 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.722689 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:45Z","lastTransitionTime":"2025-12-02T14:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.824985 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.825026 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.825035 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.825050 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.825060 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:45Z","lastTransitionTime":"2025-12-02T14:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.927800 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.927850 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.927863 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.927879 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:45 crc kubenswrapper[4814]: I1202 14:29:45.927892 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:45Z","lastTransitionTime":"2025-12-02T14:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.030742 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.030798 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.030814 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.030831 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.030843 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:46Z","lastTransitionTime":"2025-12-02T14:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.133552 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.133598 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.133610 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.133628 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.133639 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:46Z","lastTransitionTime":"2025-12-02T14:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.144041 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.144064 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.144080 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.144042 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:46 crc kubenswrapper[4814]: E1202 14:29:46.144164 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:29:46 crc kubenswrapper[4814]: E1202 14:29:46.144399 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:29:46 crc kubenswrapper[4814]: E1202 14:29:46.144442 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:29:46 crc kubenswrapper[4814]: E1202 14:29:46.144496 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.236658 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.236737 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.236757 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.236784 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.236804 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:46Z","lastTransitionTime":"2025-12-02T14:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.339536 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.339590 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.339600 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.339619 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.339633 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:46Z","lastTransitionTime":"2025-12-02T14:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.442962 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.443016 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.443033 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.443052 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.443062 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:46Z","lastTransitionTime":"2025-12-02T14:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.547154 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.547221 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.547253 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.547281 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.547298 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:46Z","lastTransitionTime":"2025-12-02T14:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.651143 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.651203 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.651216 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.651264 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.651278 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:46Z","lastTransitionTime":"2025-12-02T14:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.755518 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.755565 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.755575 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.755591 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.755600 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:46Z","lastTransitionTime":"2025-12-02T14:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.858725 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.858777 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.858786 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.858804 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.858814 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:46Z","lastTransitionTime":"2025-12-02T14:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.961708 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.961773 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.961783 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.961798 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:46 crc kubenswrapper[4814]: I1202 14:29:46.961807 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:46Z","lastTransitionTime":"2025-12-02T14:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.064016 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.064070 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.064110 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.064127 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.064137 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:47Z","lastTransitionTime":"2025-12-02T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.167398 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.167452 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.167464 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.167485 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.167496 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:47Z","lastTransitionTime":"2025-12-02T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.269878 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.269931 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.269995 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.270018 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.270031 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:47Z","lastTransitionTime":"2025-12-02T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.327308 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.327352 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.327363 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.327382 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.327394 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:47Z","lastTransitionTime":"2025-12-02T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:47 crc kubenswrapper[4814]: E1202 14:29:47.345339 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:47Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.351446 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.351499 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.351513 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.351532 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.351544 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:47Z","lastTransitionTime":"2025-12-02T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:47 crc kubenswrapper[4814]: E1202 14:29:47.366320 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:47Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.370166 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.370216 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.370248 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.370268 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.370283 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:47Z","lastTransitionTime":"2025-12-02T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:47 crc kubenswrapper[4814]: E1202 14:29:47.385472 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:47Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.393329 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.393370 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.393380 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.393408 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.393419 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:47Z","lastTransitionTime":"2025-12-02T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:47 crc kubenswrapper[4814]: E1202 14:29:47.408024 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:47Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.412153 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.412279 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.412306 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.412341 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.412367 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:47Z","lastTransitionTime":"2025-12-02T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:47 crc kubenswrapper[4814]: E1202 14:29:47.426710 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:47Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:47 crc kubenswrapper[4814]: E1202 14:29:47.427081 4814 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.429010 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.429058 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.429070 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.429088 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.429101 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:47Z","lastTransitionTime":"2025-12-02T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.532656 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.532720 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.532739 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.532769 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.532790 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:47Z","lastTransitionTime":"2025-12-02T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.636427 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.636492 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.636500 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.636517 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.636528 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:47Z","lastTransitionTime":"2025-12-02T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.740329 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.740500 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.740519 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.740548 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.740571 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:47Z","lastTransitionTime":"2025-12-02T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.844052 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.844124 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.844143 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.844171 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.844190 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:47Z","lastTransitionTime":"2025-12-02T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.946952 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.946999 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.947012 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.947032 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:47 crc kubenswrapper[4814]: I1202 14:29:47.947047 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:47Z","lastTransitionTime":"2025-12-02T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.049338 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.049383 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.049397 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.049419 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.049435 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:48Z","lastTransitionTime":"2025-12-02T14:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.061156 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:29:48 crc kubenswrapper[4814]: E1202 14:29:48.061371 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 14:30:20.061340518 +0000 UTC m=+82.185702722 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.143939 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.144027 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.144297 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.144167 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:48 crc kubenswrapper[4814]: E1202 14:29:48.144417 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:29:48 crc kubenswrapper[4814]: E1202 14:29:48.144119 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:29:48 crc kubenswrapper[4814]: E1202 14:29:48.144559 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:29:48 crc kubenswrapper[4814]: E1202 14:29:48.144672 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.152194 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.152253 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.152268 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.152290 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.152305 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:48Z","lastTransitionTime":"2025-12-02T14:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.157202 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c51c3a6690625e0485f495b314d6a6f650f503a120badabf80418b47e77d5bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1cd5f3b148daf6c46795d31f3f5df03b10861ca23111173fd83723df500849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhbnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:48Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.162661 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.162727 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.162767 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.162790 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:48 crc kubenswrapper[4814]: E1202 14:29:48.162825 4814 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 14:29:48 crc kubenswrapper[4814]: E1202 14:29:48.162898 4814 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 14:30:20.162872782 +0000 UTC m=+82.287234986 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 14:29:48 crc kubenswrapper[4814]: E1202 14:29:48.162899 4814 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 14:29:48 crc kubenswrapper[4814]: E1202 14:29:48.162932 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 14:30:20.162924343 +0000 UTC m=+82.287286547 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 14:29:48 crc kubenswrapper[4814]: E1202 14:29:48.162938 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 14:29:48 crc kubenswrapper[4814]: E1202 14:29:48.162963 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 14:29:48 crc kubenswrapper[4814]: E1202 14:29:48.162976 4814 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:48 crc kubenswrapper[4814]: E1202 14:29:48.163030 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 14:29:48 crc kubenswrapper[4814]: E1202 14:29:48.163085 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 14:29:48 crc kubenswrapper[4814]: E1202 14:29:48.163102 4814 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:48 crc kubenswrapper[4814]: E1202 14:29:48.163039 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 14:30:20.163020676 +0000 UTC m=+82.287382870 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:48 crc kubenswrapper[4814]: E1202 14:29:48.163188 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 14:30:20.163164299 +0000 UTC m=+82.287526503 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.175640 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1
b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:48Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.187684 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:48Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.202393 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:48Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.215814 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:48Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.233358 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:48Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.247833 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:29:48Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.255081 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.255132 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.255144 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.255161 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.255173 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:48Z","lastTransitionTime":"2025-12-02T14:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.261622 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:48Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.273727 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:48Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.287691 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:48Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.302896 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:48Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.321388 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca7586
0cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:
29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:48Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.335918 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:48Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:48 crc 
kubenswrapper[4814]: I1202 14:29:48.350010 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f6
72debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 
14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:48Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.358375 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.358440 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.358458 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.358480 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.358493 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:48Z","lastTransitionTime":"2025-12-02T14:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.368073 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:48Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.393812 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:32Z\\\",\\\"message\\\":\\\"dbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI1202 14:29:32.427077 6233 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 
1.737513ms\\\\nI1202 14:29:32.427047 6233 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI1202 14:29:32.427093 6233 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}\\\\nI1202 14:29:32.427101 6233 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 1.619351ms\\\\nI1202 14:29:32.427105 6233 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace for network=default : 1.942089ms\\\\nI1202 14:29:32.427146 6233 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1202 14:29:32.427188 6233 ovnkube.go:599] Stopped ovnkube\\\\nI1202 14:29:32.427219 6233 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 14:29:32.427305 6233 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd
3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:48Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.407065 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5n4wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad0a031-3e26-4596-a7a9-cd00a95b7650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5n4wm\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:48Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.461529 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.461592 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.461605 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.461624 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.461639 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:48Z","lastTransitionTime":"2025-12-02T14:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.564940 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.564998 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.565016 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.565036 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.565046 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:48Z","lastTransitionTime":"2025-12-02T14:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.668509 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.668551 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.668561 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.668577 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.668588 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:48Z","lastTransitionTime":"2025-12-02T14:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.770406 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.770454 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.770465 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.770484 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.770495 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:48Z","lastTransitionTime":"2025-12-02T14:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.872879 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.872929 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.872939 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.872957 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.872970 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:48Z","lastTransitionTime":"2025-12-02T14:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.975875 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.975937 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.975960 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.975991 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:48 crc kubenswrapper[4814]: I1202 14:29:48.976014 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:48Z","lastTransitionTime":"2025-12-02T14:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.145343 4814 scope.go:117] "RemoveContainer" containerID="abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.196288 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.196343 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.196354 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.196375 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.196388 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:49Z","lastTransitionTime":"2025-12-02T14:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.299575 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.299638 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.299649 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.299668 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.299678 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:49Z","lastTransitionTime":"2025-12-02T14:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.402760 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.402797 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.402806 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.402822 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.402834 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:49Z","lastTransitionTime":"2025-12-02T14:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.495842 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gnpv5_d5cd5566-9b36-421b-8f02-f3d12db9b829/ovnkube-controller/1.log" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.498539 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerStarted","Data":"6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9"} Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.499703 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.514372 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.514415 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.514433 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.514452 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.514467 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:49Z","lastTransitionTime":"2025-12-02T14:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.518923 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:49Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.531781 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:49Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.550258 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:32Z\\\",\\\"message\\\":\\\"dbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI1202 14:29:32.427077 6233 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 
1.737513ms\\\\nI1202 14:29:32.427047 6233 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI1202 14:29:32.427093 6233 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}\\\\nI1202 14:29:32.427101 6233 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 1.619351ms\\\\nI1202 14:29:32.427105 6233 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace for network=default : 1.942089ms\\\\nI1202 14:29:32.427146 6233 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1202 14:29:32.427188 6233 ovnkube.go:599] Stopped ovnkube\\\\nI1202 14:29:32.427219 6233 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 14:29:32.427305 6233 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:49Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.562165 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5n4wm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad0a031-3e26-4596-a7a9-cd00a95b7650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5n4wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:49Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:49 crc 
kubenswrapper[4814]: I1202 14:29:49.578825 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c51c3a6690625e0485f495b314d6a6f650f503a120badabf80418b47e77d5bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1cd5f3b148daf6c46795d31f3f5df03b10861ca23111173fd83723df500849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhbnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:49Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.598564 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:49Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.614814 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:49Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.617479 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.617503 
4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.617511 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.617526 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.617537 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:49Z","lastTransitionTime":"2025-12-02T14:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.638349 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:49Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.657955 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:49Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.681514 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:49Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.711674 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:29:49Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.719848 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.719890 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.719901 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.719917 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.719928 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:49Z","lastTransitionTime":"2025-12-02T14:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.733780 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:49Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.746692 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:49Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.760957 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:49Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.775077 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:49Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.789881 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca7586
0cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:
29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:49Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.803123 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:49Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:49 crc 
kubenswrapper[4814]: I1202 14:29:49.822291 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.822354 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.822369 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.822394 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.822416 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:49Z","lastTransitionTime":"2025-12-02T14:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.924796 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.924851 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.924864 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.924884 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:49 crc kubenswrapper[4814]: I1202 14:29:49.924897 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:49Z","lastTransitionTime":"2025-12-02T14:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.027706 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.027746 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.027755 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.027800 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.027811 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:50Z","lastTransitionTime":"2025-12-02T14:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.130524 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.130930 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.131001 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.131068 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.131137 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:50Z","lastTransitionTime":"2025-12-02T14:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.146443 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:50 crc kubenswrapper[4814]: E1202 14:29:50.147482 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.148051 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:50 crc kubenswrapper[4814]: E1202 14:29:50.148207 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.148329 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:50 crc kubenswrapper[4814]: E1202 14:29:50.148432 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.148546 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:29:50 crc kubenswrapper[4814]: E1202 14:29:50.148653 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.231618 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.233482 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.233675 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.233754 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.233831 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.233917 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:50Z","lastTransitionTime":"2025-12-02T14:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.244300 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.248127 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.264483 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.278636 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c51c3a6690625e0485f495b314d6a6f650f503a120badabf80418b47e77d5bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1cd5f3b148daf6c46795d31f3f5df03b108
61ca23111173fd83723df500849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhbnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.302294 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.320051 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.336059 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.336760 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.336806 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.336820 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.336836 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.336846 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:50Z","lastTransitionTime":"2025-12-02T14:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.351539 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.371571 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.389449 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af2
95457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.407461 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: 
I1202 14:29:50.419254 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.438676 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.439730 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.439787 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.439804 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 
14:29:50.439829 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.439848 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:50Z","lastTransitionTime":"2025-12-02T14:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.459289 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.472789 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5n4wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad0a031-3e26-4596-a7a9-cd00a95b7650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5n4wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc 
kubenswrapper[4814]: I1202 14:29:50.489483 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f6
72debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 
14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.504564 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gnpv5_d5cd5566-9b36-421b-8f02-f3d12db9b829/ovnkube-controller/2.log" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.505271 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gnpv5_d5cd5566-9b36-421b-8f02-f3d12db9b829/ovnkube-controller/1.log" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.506391 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.508668 4814 generic.go:334] "Generic (PLEG): container finished" podID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerID="6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9" exitCode=1 Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.508770 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerDied","Data":"6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9"} Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.508893 4814 scope.go:117] "RemoveContainer" containerID="abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.509683 4814 scope.go:117] "RemoveContainer" containerID="6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9" Dec 02 14:29:50 crc kubenswrapper[4814]: E1202 14:29:50.509927 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.528393 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:32Z\\\",\\\"message\\\":\\\"dbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI1202 14:29:32.427077 6233 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 
1.737513ms\\\\nI1202 14:29:32.427047 6233 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI1202 14:29:32.427093 6233 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}\\\\nI1202 14:29:32.427101 6233 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 1.619351ms\\\\nI1202 14:29:32.427105 6233 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace for network=default : 1.942089ms\\\\nI1202 14:29:32.427146 6233 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1202 14:29:32.427188 6233 ovnkube.go:599] Stopped ovnkube\\\\nI1202 14:29:32.427219 6233 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 14:29:32.427305 6233 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.542241 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.542286 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.542298 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.542317 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.542330 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:50Z","lastTransitionTime":"2025-12-02T14:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.545516 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.559306 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5486af4-e8de-4a7b-a010-1addc7fe0f90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3085bff468e4e77202bec19f8991e1747d96a401ad61a47c7f5400dc210e9dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a770612ae67e1ad2a03714776995463980ed3dd8e46434adac446ec17495b21e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6009cf24f811b4a07cf2ed259ceb0ff6d9a6a286ae7df4ea87d1873864c47100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.572312 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.586322 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.611081 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.626942 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.638437 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29
:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.645162 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.645193 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.645205 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.645224 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.645254 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:50Z","lastTransitionTime":"2025-12-02T14:29:50Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.652664 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.677853 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc9674e74d6ef66c0d7b301de0e7157f86e1f0e3345284a9e2eb0a0aaf7f839\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:32Z\\\",\\\"message\\\":\\\"dbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI1202 14:29:32.427077 6233 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 1.737513ms\\\\nI1202 14:29:32.427047 6233 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI1202 14:29:32.427093 6233 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}\\\\nI1202 14:29:32.427101 6233 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 1.619351ms\\\\nI1202 14:29:32.427105 6233 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace for network=default : 1.942089ms\\\\nI1202 14:29:32.427146 6233 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1202 14:29:32.427188 6233 ovnkube.go:599] Stopped ovnkube\\\\nI1202 14:29:32.427219 6233 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 14:29:32.427305 6233 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"message\\\":\\\":29:50.173720 6451 controller.go:132] Adding controller ef_node_controller 
event handlers\\\\nI1202 14:29:50.171784 6451 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 14:29:50.174448 6451 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 14:29:50.174498 6451 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 14:29:50.174525 6451 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 14:29:50.174532 6451 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 14:29:50.174569 6451 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 14:29:50.174577 6451 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 14:29:50.174585 6451 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 14:29:50.174599 6451 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 14:29:50.174605 6451 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 14:29:50.174614 6451 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 14:29:50.174660 6451 factory.go:656] Stopping watch factory\\\\nI1202 14:29:50.174892 6451 ovnkube.go:599] Stopped ovnkube\\\\nI1202 14:29:50.174944 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 14:29:50.175048 6451 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf87
64094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.689963 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5n4wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad0a031-3e26-4596-a7a9-cd00a95b7650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5n4wm\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.703109 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.714398 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.728868 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.739282 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.747853 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.747886 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.747895 4814 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.747913 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.747926 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:50Z","lastTransitionTime":"2025-12-02T14:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.752012 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.763940 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c51c3a6690625e0485f495b314d6a6f650f503a120badabf80418b47e77d5bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1cd5f3b148daf6c46795d31f3f5df03b108
61ca23111173fd83723df500849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhbnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.783093 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.797123 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:50Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.850879 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.850954 
4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.850978 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.851014 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.851040 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:50Z","lastTransitionTime":"2025-12-02T14:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.954335 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.954403 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.954426 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.954455 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:50 crc kubenswrapper[4814]: I1202 14:29:50.954476 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:50Z","lastTransitionTime":"2025-12-02T14:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.058020 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.058083 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.058102 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.058142 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.058160 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:51Z","lastTransitionTime":"2025-12-02T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.162774 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.162843 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.162869 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.163009 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.163028 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:51Z","lastTransitionTime":"2025-12-02T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.266392 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.266442 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.266454 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.266473 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.266487 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:51Z","lastTransitionTime":"2025-12-02T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.369772 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.369857 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.369880 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.369915 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.369940 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:51Z","lastTransitionTime":"2025-12-02T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.473682 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.473764 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.473784 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.473813 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.473832 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:51Z","lastTransitionTime":"2025-12-02T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.515596 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gnpv5_d5cd5566-9b36-421b-8f02-f3d12db9b829/ovnkube-controller/2.log" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.519701 4814 scope.go:117] "RemoveContainer" containerID="6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9" Dec 02 14:29:51 crc kubenswrapper[4814]: E1202 14:29:51.519878 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.536477 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:51Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.551191 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:51Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.566180 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c51c3a6690625e0485f495b314d6a6f650f503a120badabf80418b47e77d5bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1cd5f3b148daf6c46795d31f3f5df03b108
61ca23111173fd83723df500849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhbnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:51Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.576454 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.576516 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.576530 4814 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.576551 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.576565 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:51Z","lastTransitionTime":"2025-12-02T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.589169 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654d
d600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:51Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.610584 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:51Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.627026 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:51Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.641687 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5486af4-e8de-4a7b-a010-1addc7fe0f90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3085bff468e4e77202bec19f8991e1747d96a401ad61a47c7f5400dc210e9dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a770612ae67e1ad2a03714776995463980ed3dd8e46434adac446ec17495b21e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6009cf24f811b4a07cf2ed259ceb0ff6d9a6a286ae7df4ea87d1873864c47100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:51Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.656724 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:29:51Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.670106 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d56
5e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:51Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.679557 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.679602 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.679624 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.679649 4814 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.679663 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:51Z","lastTransitionTime":"2025-12-02T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.687766 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitC
ode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:51Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.701289 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd4360474
41229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:51Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.712606 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:51Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.726515 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:51Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.741935 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:51Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.766625 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5n4wm" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad0a031-3e26-4596-a7a9-cd00a95b7650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5n4wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:51Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:51 crc 
kubenswrapper[4814]: I1202 14:29:51.782436 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.782471 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.782482 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.782500 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.782512 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:51Z","lastTransitionTime":"2025-12-02T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.791117 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:51Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.805029 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:51Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.830759 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"message\\\":\\\":29:50.173720 6451 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1202 14:29:50.171784 6451 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 14:29:50.174448 6451 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 14:29:50.174498 6451 
handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 14:29:50.174525 6451 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 14:29:50.174532 6451 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 14:29:50.174569 6451 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 14:29:50.174577 6451 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 14:29:50.174585 6451 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 14:29:50.174599 6451 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 14:29:50.174605 6451 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 14:29:50.174614 6451 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 14:29:50.174660 6451 factory.go:656] Stopping watch factory\\\\nI1202 14:29:50.174892 6451 ovnkube.go:599] Stopped ovnkube\\\\nI1202 14:29:50.174944 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 14:29:50.175048 6451 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd
3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:51Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.886530 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.886576 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.886586 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.886603 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.886614 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:51Z","lastTransitionTime":"2025-12-02T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.989397 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.989480 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.989567 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.989615 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:51 crc kubenswrapper[4814]: I1202 14:29:51.989638 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:51Z","lastTransitionTime":"2025-12-02T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.006063 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs\") pod \"network-metrics-daemon-5n4wm\" (UID: \"2ad0a031-3e26-4596-a7a9-cd00a95b7650\") " pod="openshift-multus/network-metrics-daemon-5n4wm"
Dec 02 14:29:52 crc kubenswrapper[4814]: E1202 14:29:52.006289 4814 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 02 14:29:52 crc kubenswrapper[4814]: E1202 14:29:52.006356 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs podName:2ad0a031-3e26-4596-a7a9-cd00a95b7650 nodeName:}" failed. No retries permitted until 2025-12-02 14:30:08.00633707 +0000 UTC m=+70.130699274 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs") pod "network-metrics-daemon-5n4wm" (UID: "2ad0a031-3e26-4596-a7a9-cd00a95b7650") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.092972 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.093025 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.093038 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.093059 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.093074 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:52Z","lastTransitionTime":"2025-12-02T14:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.144271 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.144323 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.144291 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.144281 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm"
Dec 02 14:29:52 crc kubenswrapper[4814]: E1202 14:29:52.144480 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 14:29:52 crc kubenswrapper[4814]: E1202 14:29:52.144658 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650"
Dec 02 14:29:52 crc kubenswrapper[4814]: E1202 14:29:52.144851 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 14:29:52 crc kubenswrapper[4814]: E1202 14:29:52.144922 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.195899 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.195951 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.195967 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.195992 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.196013 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:52Z","lastTransitionTime":"2025-12-02T14:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.299777 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.299842 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.299859 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.299884 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.299905 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:52Z","lastTransitionTime":"2025-12-02T14:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.403828 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.403912 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.403928 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.403948 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.403959 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:52Z","lastTransitionTime":"2025-12-02T14:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.506836 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.506903 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.506915 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.506937 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.506951 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:52Z","lastTransitionTime":"2025-12-02T14:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.609856 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.609915 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.609925 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.609942 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.609955 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:52Z","lastTransitionTime":"2025-12-02T14:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.713862 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.713947 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.713959 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.713984 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.713996 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:52Z","lastTransitionTime":"2025-12-02T14:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.817844 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.817917 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.817939 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.817967 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.817991 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:52Z","lastTransitionTime":"2025-12-02T14:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.920707 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.920756 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.920771 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.920792 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:52 crc kubenswrapper[4814]: I1202 14:29:52.920807 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:52Z","lastTransitionTime":"2025-12-02T14:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.024495 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.024576 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.024597 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.024634 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.024657 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:53Z","lastTransitionTime":"2025-12-02T14:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.127414 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.127492 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.127513 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.127543 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.127564 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:53Z","lastTransitionTime":"2025-12-02T14:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.230716 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.230794 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.230811 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.230839 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.230884 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:53Z","lastTransitionTime":"2025-12-02T14:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.334156 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.334249 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.334259 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.334279 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.334297 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:53Z","lastTransitionTime":"2025-12-02T14:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.437431 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.437499 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.437511 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.437539 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.437552 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:53Z","lastTransitionTime":"2025-12-02T14:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.540916 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.540993 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.541014 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.541041 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.541062 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:53Z","lastTransitionTime":"2025-12-02T14:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.644955 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.645023 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.645036 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.645059 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.645073 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:53Z","lastTransitionTime":"2025-12-02T14:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.748925 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.749002 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.749019 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.749047 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.749065 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:53Z","lastTransitionTime":"2025-12-02T14:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.852873 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.853368 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.853383 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.853402 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.853416 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:53Z","lastTransitionTime":"2025-12-02T14:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.957153 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.957222 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.957272 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.957300 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:53 crc kubenswrapper[4814]: I1202 14:29:53.957320 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:53Z","lastTransitionTime":"2025-12-02T14:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.061291 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.061354 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.061366 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.061390 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.061404 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:54Z","lastTransitionTime":"2025-12-02T14:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.144905 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.144971 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.145068 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 14:29:54 crc kubenswrapper[4814]: E1202 14:29:54.145141 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.145153 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 14:29:54 crc kubenswrapper[4814]: E1202 14:29:54.145363 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 14:29:54 crc kubenswrapper[4814]: E1202 14:29:54.145497 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 14:29:54 crc kubenswrapper[4814]: E1202 14:29:54.145579 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.165051 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.165108 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.165121 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.165139 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.165155 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:54Z","lastTransitionTime":"2025-12-02T14:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.269130 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.269183 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.269196 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.269219 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.269251 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:54Z","lastTransitionTime":"2025-12-02T14:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.372535 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.372595 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.372608 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.372629 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.372643 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:54Z","lastTransitionTime":"2025-12-02T14:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.475834 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.475889 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.475898 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.475914 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.475940 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:54Z","lastTransitionTime":"2025-12-02T14:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.578394 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.578484 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.578500 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.578549 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.578577 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:54Z","lastTransitionTime":"2025-12-02T14:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.680802 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.680844 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.680854 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.680868 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.680878 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:54Z","lastTransitionTime":"2025-12-02T14:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.783837 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.783900 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.783912 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.783938 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.783951 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:54Z","lastTransitionTime":"2025-12-02T14:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.886513 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.886563 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.886576 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.886605 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.886631 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:54Z","lastTransitionTime":"2025-12-02T14:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.988968 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.989019 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.989029 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.989047 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:54 crc kubenswrapper[4814]: I1202 14:29:54.989058 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:54Z","lastTransitionTime":"2025-12-02T14:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.092159 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.092202 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.092211 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.092251 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.092265 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:55Z","lastTransitionTime":"2025-12-02T14:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.196939 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.197008 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.197032 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.197062 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.197081 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:55Z","lastTransitionTime":"2025-12-02T14:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.300821 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.300872 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.300883 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.300900 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.300912 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:55Z","lastTransitionTime":"2025-12-02T14:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.404304 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.404359 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.404370 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.404392 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.404405 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:55Z","lastTransitionTime":"2025-12-02T14:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.507320 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.507370 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.507382 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.507402 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.507415 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:55Z","lastTransitionTime":"2025-12-02T14:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.610217 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.610308 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.610328 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.610351 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.610366 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:55Z","lastTransitionTime":"2025-12-02T14:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.713384 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.713446 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.713458 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.713479 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.713494 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:55Z","lastTransitionTime":"2025-12-02T14:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.816641 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.816714 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.816726 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.816746 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.816759 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:55Z","lastTransitionTime":"2025-12-02T14:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.920525 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.920571 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.920582 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.920600 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:55 crc kubenswrapper[4814]: I1202 14:29:55.920613 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:55Z","lastTransitionTime":"2025-12-02T14:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.023410 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.023501 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.023515 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.023569 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.023583 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:56Z","lastTransitionTime":"2025-12-02T14:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.126417 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.126495 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.126509 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.126531 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.126548 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:56Z","lastTransitionTime":"2025-12-02T14:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.144176 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.144273 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.144278 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:56 crc kubenswrapper[4814]: E1202 14:29:56.144398 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.144452 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:29:56 crc kubenswrapper[4814]: E1202 14:29:56.144548 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:29:56 crc kubenswrapper[4814]: E1202 14:29:56.144743 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:29:56 crc kubenswrapper[4814]: E1202 14:29:56.144778 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.229126 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.229181 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.229191 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.229211 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.229221 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:56Z","lastTransitionTime":"2025-12-02T14:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.332202 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.332268 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.332280 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.332297 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.332308 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:56Z","lastTransitionTime":"2025-12-02T14:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.435754 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.438718 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.438754 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.438782 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.438813 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:56Z","lastTransitionTime":"2025-12-02T14:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.541875 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.541930 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.541941 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.541961 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.541973 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:56Z","lastTransitionTime":"2025-12-02T14:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.644071 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.644122 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.644136 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.644155 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.644167 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:56Z","lastTransitionTime":"2025-12-02T14:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.748142 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.748219 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.748273 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.748314 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.748337 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:56Z","lastTransitionTime":"2025-12-02T14:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.851975 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.852067 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.852085 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.852110 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.852127 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:56Z","lastTransitionTime":"2025-12-02T14:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.955431 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.955503 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.955512 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.955534 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:56 crc kubenswrapper[4814]: I1202 14:29:56.955547 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:56Z","lastTransitionTime":"2025-12-02T14:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.058583 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.058656 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.058673 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.058695 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.058713 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:57Z","lastTransitionTime":"2025-12-02T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.162042 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.162124 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.162143 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.162173 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.162199 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:57Z","lastTransitionTime":"2025-12-02T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.265019 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.265507 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.265638 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.265768 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.265904 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:57Z","lastTransitionTime":"2025-12-02T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.368634 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.368674 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.368686 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.368704 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.368720 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:57Z","lastTransitionTime":"2025-12-02T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.471281 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.471357 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.471382 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.471418 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.471443 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:57Z","lastTransitionTime":"2025-12-02T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.574001 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.574055 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.574070 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.574095 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.574111 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:57Z","lastTransitionTime":"2025-12-02T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.677774 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.677829 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.677844 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.677865 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.677878 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:57Z","lastTransitionTime":"2025-12-02T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.768214 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.768297 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.768310 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.768331 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.768346 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:57Z","lastTransitionTime":"2025-12-02T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:57 crc kubenswrapper[4814]: E1202 14:29:57.784748 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:57Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.790672 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.790751 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.790767 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.790788 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.790802 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:57Z","lastTransitionTime":"2025-12-02T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:57 crc kubenswrapper[4814]: E1202 14:29:57.805093 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:57Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.809374 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.809413 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.809429 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.809451 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.809468 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:57Z","lastTransitionTime":"2025-12-02T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:57 crc kubenswrapper[4814]: E1202 14:29:57.823856 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:57Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.828753 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.828791 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.828808 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.828832 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.828847 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:57Z","lastTransitionTime":"2025-12-02T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:57 crc kubenswrapper[4814]: E1202 14:29:57.869784 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:57Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:57 crc kubenswrapper[4814]: E1202 14:29:57.869950 4814 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.871904 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.871938 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.871950 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.871975 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.871989 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:57Z","lastTransitionTime":"2025-12-02T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.974772 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.974839 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.974852 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.974874 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:57 crc kubenswrapper[4814]: I1202 14:29:57.974887 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:57Z","lastTransitionTime":"2025-12-02T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.078700 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.078772 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.078789 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.078816 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.078837 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:58Z","lastTransitionTime":"2025-12-02T14:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.143762 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.143812 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.143837 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.143762 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:29:58 crc kubenswrapper[4814]: E1202 14:29:58.143960 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:29:58 crc kubenswrapper[4814]: E1202 14:29:58.144133 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:29:58 crc kubenswrapper[4814]: E1202 14:29:58.144374 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:29:58 crc kubenswrapper[4814]: E1202 14:29:58.144419 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.166144 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-
release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:58Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.178718 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf
3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:58Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.181907 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.181952 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.181962 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.181982 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.181995 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:58Z","lastTransitionTime":"2025-12-02T14:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.192603 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:58Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.207407 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:58Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.222048 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:58Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.234364 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5n4wm" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad0a031-3e26-4596-a7a9-cd00a95b7650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5n4wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:58Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:58 crc 
kubenswrapper[4814]: I1202 14:29:58.253712 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f6
72debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 
14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:58Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.267539 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:58Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.284420 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.284479 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.284517 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.284537 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.284550 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:58Z","lastTransitionTime":"2025-12-02T14:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.290870 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"message\\\":\\\":29:50.173720 6451 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1202 14:29:50.171784 6451 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 14:29:50.174448 6451 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 14:29:50.174498 6451 
handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 14:29:50.174525 6451 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 14:29:50.174532 6451 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 14:29:50.174569 6451 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 14:29:50.174577 6451 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 14:29:50.174585 6451 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 14:29:50.174599 6451 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 14:29:50.174605 6451 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 14:29:50.174614 6451 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 14:29:50.174660 6451 factory.go:656] Stopping watch factory\\\\nI1202 14:29:50.174892 6451 ovnkube.go:599] Stopped ovnkube\\\\nI1202 14:29:50.174944 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 14:29:50.175048 6451 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd
3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:58Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.307128 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:58Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.323415 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:58Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.336827 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c51c3a6690625e0485f495b314d6a6f650f503a120badabf80418b47e77d5bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1cd5f3b148daf6c46795d31f3f5df03b108
61ca23111173fd83723df500849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhbnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:58Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.360928 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:58Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.377388 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:58Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.387539 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.387976 
4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.388158 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.388254 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.388279 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:58Z","lastTransitionTime":"2025-12-02T14:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.397188 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:58Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.412272 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5486af4-e8de-4a7b-a010-1addc7fe0f90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3085bff468e4e77202bec19f8991e1747d96a401ad61a47c7f5400dc210e9dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a770612ae67e1ad2a03714776995463980ed3dd8e46434adac446ec17495b21e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6009cf24f811b4a07cf2ed259ceb0ff6d9a6a286ae7df4ea87d1873864c47100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:58Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.437350 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:29:58Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.450807 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d56
5e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:29:58Z is after 2025-08-24T17:21:41Z" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.491040 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.491397 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.491479 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.491580 4814 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.491697 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:58Z","lastTransitionTime":"2025-12-02T14:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.594996 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.595053 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.595066 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.595087 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.595098 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:58Z","lastTransitionTime":"2025-12-02T14:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.697386 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.697431 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.697444 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.697462 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.697475 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:58Z","lastTransitionTime":"2025-12-02T14:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.800510 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.800563 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.800575 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.800593 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.800606 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:58Z","lastTransitionTime":"2025-12-02T14:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.903395 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.903457 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.903472 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.903495 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:58 crc kubenswrapper[4814]: I1202 14:29:58.903508 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:58Z","lastTransitionTime":"2025-12-02T14:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.005988 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.006050 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.006066 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.006090 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.006103 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:59Z","lastTransitionTime":"2025-12-02T14:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.109504 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.109569 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.109579 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.109599 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.109611 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:59Z","lastTransitionTime":"2025-12-02T14:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.212617 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.212689 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.212708 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.212736 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.212755 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:59Z","lastTransitionTime":"2025-12-02T14:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.316394 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.316449 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.316463 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.316487 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.316506 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:59Z","lastTransitionTime":"2025-12-02T14:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.420323 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.420372 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.420384 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.420404 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.420419 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:59Z","lastTransitionTime":"2025-12-02T14:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.523670 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.523723 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.523736 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.523758 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.523771 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:59Z","lastTransitionTime":"2025-12-02T14:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.627303 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.627376 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.627396 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.627424 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.627446 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:59Z","lastTransitionTime":"2025-12-02T14:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.731315 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.731367 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.731380 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.731398 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.731411 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:59Z","lastTransitionTime":"2025-12-02T14:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.834567 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.834637 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.834653 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.834678 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.834696 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:59Z","lastTransitionTime":"2025-12-02T14:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.938154 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.938267 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.938283 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.938311 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:29:59 crc kubenswrapper[4814]: I1202 14:29:59.938330 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:29:59Z","lastTransitionTime":"2025-12-02T14:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.042091 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.042171 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.042190 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.042219 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.042264 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:00Z","lastTransitionTime":"2025-12-02T14:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.144750 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.144762 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.144828 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.144975 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.145146 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.145500 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.145567 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.145654 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.145724 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:00Z","lastTransitionTime":"2025-12-02T14:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:00 crc kubenswrapper[4814]: E1202 14:30:00.146066 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:00 crc kubenswrapper[4814]: E1202 14:30:00.146382 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:00 crc kubenswrapper[4814]: E1202 14:30:00.146548 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:00 crc kubenswrapper[4814]: E1202 14:30:00.146652 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.248420 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.248704 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.248810 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.248883 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.248944 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:00Z","lastTransitionTime":"2025-12-02T14:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.351584 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.351626 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.351636 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.351653 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.351665 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:00Z","lastTransitionTime":"2025-12-02T14:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.454918 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.456215 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.456490 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.456778 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.456988 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:00Z","lastTransitionTime":"2025-12-02T14:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.560517 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.560865 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.560997 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.561131 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.561250 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:00Z","lastTransitionTime":"2025-12-02T14:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.663934 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.664287 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.664386 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.664490 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.664579 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:00Z","lastTransitionTime":"2025-12-02T14:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.766847 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.766893 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.766904 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.766927 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.766942 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:00Z","lastTransitionTime":"2025-12-02T14:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.869362 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.869416 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.869429 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.869448 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.869460 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:00Z","lastTransitionTime":"2025-12-02T14:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.972042 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.972092 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.972105 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.972123 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:00 crc kubenswrapper[4814]: I1202 14:30:00.972136 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:00Z","lastTransitionTime":"2025-12-02T14:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.074404 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.074997 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.075080 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.075153 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.075211 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:01Z","lastTransitionTime":"2025-12-02T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.177562 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.177639 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.177652 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.177676 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.177694 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:01Z","lastTransitionTime":"2025-12-02T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.280159 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.280202 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.280211 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.280251 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.280261 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:01Z","lastTransitionTime":"2025-12-02T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.382693 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.382768 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.382783 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.382801 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.382812 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:01Z","lastTransitionTime":"2025-12-02T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.486073 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.486770 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.486874 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.486950 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.487036 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:01Z","lastTransitionTime":"2025-12-02T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.589916 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.590032 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.590059 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.590105 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.590131 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:01Z","lastTransitionTime":"2025-12-02T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.692538 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.692595 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.692609 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.692631 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.692643 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:01Z","lastTransitionTime":"2025-12-02T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.795065 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.795106 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.795119 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.795138 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.795150 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:01Z","lastTransitionTime":"2025-12-02T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.897342 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.897395 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.897409 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.897428 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:01 crc kubenswrapper[4814]: I1202 14:30:01.897442 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:01Z","lastTransitionTime":"2025-12-02T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:01.999976 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.000039 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.000051 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.000070 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.000084 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:02Z","lastTransitionTime":"2025-12-02T14:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.103729 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.103786 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.103801 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.103824 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.103839 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:02Z","lastTransitionTime":"2025-12-02T14:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.144298 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.144373 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.144401 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.144443 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:02 crc kubenswrapper[4814]: E1202 14:30:02.144553 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:02 crc kubenswrapper[4814]: E1202 14:30:02.144641 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:02 crc kubenswrapper[4814]: E1202 14:30:02.144997 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:02 crc kubenswrapper[4814]: E1202 14:30:02.145363 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.206405 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.206446 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.206457 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.206471 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.206483 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:02Z","lastTransitionTime":"2025-12-02T14:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.308931 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.308993 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.309012 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.309037 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.309058 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:02Z","lastTransitionTime":"2025-12-02T14:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.411789 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.411842 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.411855 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.411875 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.411888 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:02Z","lastTransitionTime":"2025-12-02T14:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.514819 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.514876 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.514889 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.514908 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.514920 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:02Z","lastTransitionTime":"2025-12-02T14:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.618446 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.618509 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.618533 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.618568 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.618593 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:02Z","lastTransitionTime":"2025-12-02T14:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.720915 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.720971 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.720984 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.721004 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.721016 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:02Z","lastTransitionTime":"2025-12-02T14:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.824377 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.824429 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.824444 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.824464 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.824480 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:02Z","lastTransitionTime":"2025-12-02T14:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.926793 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.926835 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.926846 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.926862 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:02 crc kubenswrapper[4814]: I1202 14:30:02.926872 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:02Z","lastTransitionTime":"2025-12-02T14:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.030348 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.030831 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.030845 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.030874 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.030890 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:03Z","lastTransitionTime":"2025-12-02T14:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.133545 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.133600 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.133612 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.133630 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.133641 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:03Z","lastTransitionTime":"2025-12-02T14:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.236829 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.236909 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.236935 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.236970 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.237038 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:03Z","lastTransitionTime":"2025-12-02T14:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.339916 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.339994 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.340012 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.340039 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.340060 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:03Z","lastTransitionTime":"2025-12-02T14:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.443472 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.443541 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.443562 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.443599 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.443624 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:03Z","lastTransitionTime":"2025-12-02T14:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.545955 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.546021 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.546040 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.546064 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.546081 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:03Z","lastTransitionTime":"2025-12-02T14:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.650017 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.650545 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.650738 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.650986 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.651150 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:03Z","lastTransitionTime":"2025-12-02T14:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.754034 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.754120 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.754144 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.754179 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.754199 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:03Z","lastTransitionTime":"2025-12-02T14:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.858422 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.858508 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.858535 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.858716 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.858747 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:03Z","lastTransitionTime":"2025-12-02T14:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.962058 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.962108 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.962121 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.962140 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:03 crc kubenswrapper[4814]: I1202 14:30:03.962150 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:03Z","lastTransitionTime":"2025-12-02T14:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.065483 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.065534 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.065547 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.065566 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.065580 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:04Z","lastTransitionTime":"2025-12-02T14:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.144616 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.144616 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.144766 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.144786 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:04 crc kubenswrapper[4814]: E1202 14:30:04.144900 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:04 crc kubenswrapper[4814]: E1202 14:30:04.145041 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:04 crc kubenswrapper[4814]: E1202 14:30:04.145215 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:04 crc kubenswrapper[4814]: E1202 14:30:04.145312 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.167947 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.167998 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.168008 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.168026 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.168038 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:04Z","lastTransitionTime":"2025-12-02T14:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.270761 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.270811 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.270820 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.270840 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.270850 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:04Z","lastTransitionTime":"2025-12-02T14:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.374103 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.374147 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.374157 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.374172 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.374182 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:04Z","lastTransitionTime":"2025-12-02T14:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.476595 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.476642 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.476664 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.476688 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.476699 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:04Z","lastTransitionTime":"2025-12-02T14:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.579947 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.580002 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.580014 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.580037 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.580051 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:04Z","lastTransitionTime":"2025-12-02T14:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.683547 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.683915 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.683984 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.684410 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.684518 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:04Z","lastTransitionTime":"2025-12-02T14:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.787878 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.787950 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.787970 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.787999 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.788019 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:04Z","lastTransitionTime":"2025-12-02T14:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.890466 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.890523 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.890534 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.890552 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.890563 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:04Z","lastTransitionTime":"2025-12-02T14:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.994080 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.994138 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.994152 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.994173 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:04 crc kubenswrapper[4814]: I1202 14:30:04.994187 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:04Z","lastTransitionTime":"2025-12-02T14:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.097285 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.097359 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.097379 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.097409 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.097430 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:05Z","lastTransitionTime":"2025-12-02T14:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.145170 4814 scope.go:117] "RemoveContainer" containerID="6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9" Dec 02 14:30:05 crc kubenswrapper[4814]: E1202 14:30:05.145560 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.199993 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.200062 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.200080 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.200109 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.200131 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:05Z","lastTransitionTime":"2025-12-02T14:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.304125 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.304189 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.304201 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.304220 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.304250 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:05Z","lastTransitionTime":"2025-12-02T14:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.416741 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.416791 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.416804 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.416820 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.416830 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:05Z","lastTransitionTime":"2025-12-02T14:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.519872 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.519922 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.519933 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.519952 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.519964 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:05Z","lastTransitionTime":"2025-12-02T14:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.622983 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.623049 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.623062 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.623089 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.623104 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:05Z","lastTransitionTime":"2025-12-02T14:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.725887 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.725950 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.725968 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.725988 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.726003 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:05Z","lastTransitionTime":"2025-12-02T14:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.829153 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.829257 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.829323 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.829363 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.829388 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:05Z","lastTransitionTime":"2025-12-02T14:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.932247 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.932296 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.932307 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.932325 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:05 crc kubenswrapper[4814]: I1202 14:30:05.932341 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:05Z","lastTransitionTime":"2025-12-02T14:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.034809 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.034862 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.034874 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.034893 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.034906 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:06Z","lastTransitionTime":"2025-12-02T14:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.138709 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.138764 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.138785 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.138811 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.138827 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:06Z","lastTransitionTime":"2025-12-02T14:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.144026 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.144043 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.144183 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:06 crc kubenswrapper[4814]: E1202 14:30:06.144372 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.144409 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:06 crc kubenswrapper[4814]: E1202 14:30:06.144542 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:06 crc kubenswrapper[4814]: E1202 14:30:06.144613 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:06 crc kubenswrapper[4814]: E1202 14:30:06.144761 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.241271 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.241319 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.241330 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.241349 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.241362 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:06Z","lastTransitionTime":"2025-12-02T14:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.343960 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.344014 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.344024 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.344044 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.344055 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:06Z","lastTransitionTime":"2025-12-02T14:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.447957 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.448009 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.448019 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.448038 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.448054 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:06Z","lastTransitionTime":"2025-12-02T14:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.551925 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.551991 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.552006 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.552029 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.552043 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:06Z","lastTransitionTime":"2025-12-02T14:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.654736 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.654787 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.654797 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.654815 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.654825 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:06Z","lastTransitionTime":"2025-12-02T14:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.757769 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.757842 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.757852 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.757870 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.757880 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:06Z","lastTransitionTime":"2025-12-02T14:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.861621 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.861678 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.861690 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.861709 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.861720 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:06Z","lastTransitionTime":"2025-12-02T14:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.964457 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.964533 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.964551 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.964584 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:06 crc kubenswrapper[4814]: I1202 14:30:06.964600 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:06Z","lastTransitionTime":"2025-12-02T14:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.067605 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.067670 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.067680 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.067700 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.067711 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:07Z","lastTransitionTime":"2025-12-02T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.171006 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.171054 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.171065 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.171082 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.171095 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:07Z","lastTransitionTime":"2025-12-02T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.274002 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.274053 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.274062 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.274083 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.274095 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:07Z","lastTransitionTime":"2025-12-02T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.376992 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.377037 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.377046 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.377065 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.377082 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:07Z","lastTransitionTime":"2025-12-02T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.480058 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.480148 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.480158 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.480178 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.480190 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:07Z","lastTransitionTime":"2025-12-02T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.582158 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.582214 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.582260 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.582293 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.582305 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:07Z","lastTransitionTime":"2025-12-02T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.685007 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.685053 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.685066 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.685086 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.685097 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:07Z","lastTransitionTime":"2025-12-02T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.787476 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.787526 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.787538 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.787559 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.787580 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:07Z","lastTransitionTime":"2025-12-02T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.890888 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.890959 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.890976 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.890998 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.891013 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:07Z","lastTransitionTime":"2025-12-02T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.923779 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.923836 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.923848 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.923877 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.923891 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:07Z","lastTransitionTime":"2025-12-02T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:07 crc kubenswrapper[4814]: E1202 14:30:07.938069 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:07Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.942581 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.942641 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.942652 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.942675 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.942689 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:07Z","lastTransitionTime":"2025-12-02T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:07 crc kubenswrapper[4814]: E1202 14:30:07.958584 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:07Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.963915 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.963964 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.963977 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.963999 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.964013 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:07Z","lastTransitionTime":"2025-12-02T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:07 crc kubenswrapper[4814]: E1202 14:30:07.981073 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:07Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.985909 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.985976 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.986076 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.986103 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:07 crc kubenswrapper[4814]: I1202 14:30:07.986122 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:07Z","lastTransitionTime":"2025-12-02T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:08 crc kubenswrapper[4814]: E1202 14:30:08.004883 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.009675 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.009712 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.009729 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.009754 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.009771 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:08Z","lastTransitionTime":"2025-12-02T14:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:08 crc kubenswrapper[4814]: E1202 14:30:08.025750 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: E1202 14:30:08.025902 4814 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.027556 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.027585 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.027599 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.027618 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.027633 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:08Z","lastTransitionTime":"2025-12-02T14:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.104625 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs\") pod \"network-metrics-daemon-5n4wm\" (UID: \"2ad0a031-3e26-4596-a7a9-cd00a95b7650\") " pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:08 crc kubenswrapper[4814]: E1202 14:30:08.104815 4814 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 14:30:08 crc kubenswrapper[4814]: E1202 14:30:08.104874 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs podName:2ad0a031-3e26-4596-a7a9-cd00a95b7650 nodeName:}" failed. No retries permitted until 2025-12-02 14:30:40.104857948 +0000 UTC m=+102.229220152 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs") pod "network-metrics-daemon-5n4wm" (UID: "2ad0a031-3e26-4596-a7a9-cd00a95b7650") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.130951 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.130982 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.130993 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.131008 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.131019 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:08Z","lastTransitionTime":"2025-12-02T14:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.144366 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.144438 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.144366 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:08 crc kubenswrapper[4814]: E1202 14:30:08.144484 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.144494 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:08 crc kubenswrapper[4814]: E1202 14:30:08.144593 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:08 crc kubenswrapper[4814]: E1202 14:30:08.144677 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:08 crc kubenswrapper[4814]: E1202 14:30:08.144748 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.159950 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5486af4-e8de-4a7b-a010-1addc7fe0f90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3085bff468e4e77202bec19f8991e1747d96a401ad61a47c7f5400dc210e9dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06b
c35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a770612ae67e1ad2a03714776995463980ed3dd8e46434adac446ec17495b21e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6009cf24f811b4a07cf2ed259ceb0ff6d9a6a286ae7df4ea87d1873864c47100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.172941 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.186846 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d56
5e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.199659 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.214039 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.233722 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca7586
0cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:
29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.234171 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.234207 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.234225 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.234269 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.234286 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:08Z","lastTransitionTime":"2025-12-02T14:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.249818 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.266639 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.285127 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T1
4:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.298083 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.320312 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"message\\\":\\\":29:50.173720 6451 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1202 14:29:50.171784 6451 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 14:29:50.174448 6451 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 14:29:50.174498 6451 
handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 14:29:50.174525 6451 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 14:29:50.174532 6451 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 14:29:50.174569 6451 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 14:29:50.174577 6451 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 14:29:50.174585 6451 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 14:29:50.174599 6451 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 14:29:50.174605 6451 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 14:29:50.174614 6451 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 14:29:50.174660 6451 factory.go:656] Stopping watch factory\\\\nI1202 14:29:50.174892 6451 ovnkube.go:599] Stopped ovnkube\\\\nI1202 14:29:50.174944 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 14:29:50.175048 6451 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd
3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.332953 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5n4wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad0a031-3e26-4596-a7a9-cd00a95b7650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5n4wm\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.336599 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.336634 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.336669 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.336687 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.336698 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:08Z","lastTransitionTime":"2025-12-02T14:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.359593 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.390565 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.406056 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.421057 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.437579 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.439491 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.439515 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.439525 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.439559 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.439571 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:08Z","lastTransitionTime":"2025-12-02T14:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.453643 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c51c3a6690625e0485f495b314d6a6f650f503a120badabf80418b47e77d5bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1cd5f3b148daf6c46795d31f3f5df03b108
61ca23111173fd83723df500849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhbnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.541975 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.542020 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.542245 4814 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.542270 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.542287 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:08Z","lastTransitionTime":"2025-12-02T14:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.583110 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cd6tj_733ab60d-6038-4a5b-9120-9c4ef1f47774/kube-multus/0.log" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.583175 4814 generic.go:334] "Generic (PLEG): container finished" podID="733ab60d-6038-4a5b-9120-9c4ef1f47774" containerID="b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e" exitCode=1 Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.583260 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cd6tj" event={"ID":"733ab60d-6038-4a5b-9120-9c4ef1f47774","Type":"ContainerDied","Data":"b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e"} Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.583928 4814 scope.go:117] "RemoveContainer" containerID="b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.600773 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5486af4-e8de-4a7b-a010-1addc7fe0f90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3085bff468e4e77202bec19f8991e1747d96a401ad61a47c7f5400dc210e9dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a770612ae67e1ad2a03714776995463980ed3dd8e46434adac446ec17495b21e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6009cf24f811b4a07cf2ed259ceb0ff6d9a6a286ae7df4ea87d1873864c47100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.627090 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.644395 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d56
5e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.647846 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.647914 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.647927 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.647972 4814 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.647989 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:08Z","lastTransitionTime":"2025-12-02T14:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.662865 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.676161 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:30:08Z\\\",\\\"message\\\":\\\"2025-12-02T14:29:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_aa0e508b-d667-42da-89f7-232456134f15\\\\n2025-12-02T14:29:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_aa0e508b-d667-42da-89f7-232456134f15 to /host/opt/cni/bin/\\\\n2025-12-02T14:29:23Z [verbose] multus-daemon started\\\\n2025-12-02T14:29:23Z [verbose] Readiness Indicator file check\\\\n2025-12-02T14:30:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.691012 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddf
bb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\
\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.705292 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd4360474
41229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.720895 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.737575 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T1
4:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.749725 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.751019 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.751184 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.751318 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.751411 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.751498 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:08Z","lastTransitionTime":"2025-12-02T14:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.768403 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"message\\\":\\\":29:50.173720 6451 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1202 14:29:50.171784 6451 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 14:29:50.174448 6451 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 14:29:50.174498 6451 
handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 14:29:50.174525 6451 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 14:29:50.174532 6451 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 14:29:50.174569 6451 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 14:29:50.174577 6451 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 14:29:50.174585 6451 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 14:29:50.174599 6451 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 14:29:50.174605 6451 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 14:29:50.174614 6451 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 14:29:50.174660 6451 factory.go:656] Stopping watch factory\\\\nI1202 14:29:50.174892 6451 ovnkube.go:599] Stopped ovnkube\\\\nI1202 14:29:50.174944 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 14:29:50.175048 6451 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd
3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.778388 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5n4wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad0a031-3e26-4596-a7a9-cd00a95b7650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5n4wm\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.798993 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.811756 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.825992 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.838722 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.852177 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.853928 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.854020 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.854107 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.854183 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.854261 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:08Z","lastTransitionTime":"2025-12-02T14:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.862800 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c51c3a6690625e0485f495b314d6a6f650f503a120badabf80418b47e77d5bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1cd5f3b148daf6c46795d31f3f5df03b108
61ca23111173fd83723df500849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhbnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:08Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.957207 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.957257 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.957266 4814 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.957280 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:08 crc kubenswrapper[4814]: I1202 14:30:08.957289 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:08Z","lastTransitionTime":"2025-12-02T14:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.059754 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.059794 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.059806 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.059821 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.059831 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:09Z","lastTransitionTime":"2025-12-02T14:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.162748 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.163067 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.163200 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.163330 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.163422 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:09Z","lastTransitionTime":"2025-12-02T14:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.266060 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.266464 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.266558 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.266649 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.266748 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:09Z","lastTransitionTime":"2025-12-02T14:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.369289 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.369332 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.369343 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.369359 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.369372 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:09Z","lastTransitionTime":"2025-12-02T14:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.472419 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.472486 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.472498 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.472519 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.472534 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:09Z","lastTransitionTime":"2025-12-02T14:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.574794 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.575067 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.575132 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.575266 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.575346 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:09Z","lastTransitionTime":"2025-12-02T14:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.589251 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cd6tj_733ab60d-6038-4a5b-9120-9c4ef1f47774/kube-multus/0.log" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.589325 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cd6tj" event={"ID":"733ab60d-6038-4a5b-9120-9c4ef1f47774","Type":"ContainerStarted","Data":"856a317b4636fc96e9833f76c75051122ac8cde18987182481a998ea6c5824db"} Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.606285 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:09Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.620650 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5486af4-e8de-4a7b-a010-1addc7fe0f90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3085bff468e4e77202bec19f8991e1747d96a401ad61a47c7f5400dc210e9dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a770612ae67e1ad2a03714776995463980ed3dd8e46434adac446ec17495b21e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6009cf24f811b4a07cf2ed259ceb0ff6d9a6a286ae7df4ea87d1873864c47100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:09Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.632654 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:30:09Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.648895 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856a317b4636fc96e9833f76c75051122ac8cde18987182481a998ea6c5824db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:30:08Z\\\",\\\"message\\\":\\\"2025-12-02T14:29:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_aa0e508b-d667-42da-89f7-232456134f15\\\\n2025-12-02T14:29:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_aa0e508b-d667-42da-89f7-232456134f15 to /host/opt/cni/bin/\\\\n2025-12-02T14:29:23Z [verbose] multus-daemon started\\\\n2025-12-02T14:29:23Z [verbose] Readiness Indicator file check\\\\n2025-12-02T14:30:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:09Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.668873 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af2
95457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:09Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.678841 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.678916 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.678937 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.678988 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.679012 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:09Z","lastTransitionTime":"2025-12-02T14:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.687025 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:09Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.700748 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:09Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.714378 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:09Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.732660 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"message\\\":\\\":29:50.173720 6451 controller.go:132] Adding controller ef_node_controller event 
handlers\\\\nI1202 14:29:50.171784 6451 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 14:29:50.174448 6451 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 14:29:50.174498 6451 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 14:29:50.174525 6451 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 14:29:50.174532 6451 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 14:29:50.174569 6451 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 14:29:50.174577 6451 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 14:29:50.174585 6451 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 14:29:50.174599 6451 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 14:29:50.174605 6451 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 14:29:50.174614 6451 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 14:29:50.174660 6451 factory.go:656] Stopping watch factory\\\\nI1202 14:29:50.174892 6451 ovnkube.go:599] Stopped ovnkube\\\\nI1202 14:29:50.174944 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 14:29:50.175048 6451 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd
3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:09Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.745250 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5n4wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad0a031-3e26-4596-a7a9-cd00a95b7650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5n4wm\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:09Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.759689 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bc
c6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:09Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.772437 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:09Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.782244 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.782282 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.782290 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.782307 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.782317 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:09Z","lastTransitionTime":"2025-12-02T14:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.788495 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:09Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.801798 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:09Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.815292 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:09Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.825805 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c51c3a6690625e0485f495b314d6a6f650f503a120badabf80418b47e77d5bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1cd5f3b148daf6c46795d31f3f5df03b108
61ca23111173fd83723df500849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhbnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:09Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.844457 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:09Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.855728 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:09Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.884958 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.885006 
4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.885015 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.885031 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.885041 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:09Z","lastTransitionTime":"2025-12-02T14:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.987784 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.987831 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.987846 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.987871 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:09 crc kubenswrapper[4814]: I1202 14:30:09.987888 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:09Z","lastTransitionTime":"2025-12-02T14:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.091419 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.091460 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.091471 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.091490 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.091504 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:10Z","lastTransitionTime":"2025-12-02T14:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.144070 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:10 crc kubenswrapper[4814]: E1202 14:30:10.144300 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.144591 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.144692 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:10 crc kubenswrapper[4814]: E1202 14:30:10.144807 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:10 crc kubenswrapper[4814]: E1202 14:30:10.144966 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.145042 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:10 crc kubenswrapper[4814]: E1202 14:30:10.145221 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.194150 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.194192 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.194203 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.194217 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.194254 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:10Z","lastTransitionTime":"2025-12-02T14:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.297416 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.297454 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.297464 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.297483 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.297493 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:10Z","lastTransitionTime":"2025-12-02T14:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.400479 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.400536 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.400550 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.400569 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.400580 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:10Z","lastTransitionTime":"2025-12-02T14:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.503517 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.503578 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.503594 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.503617 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.503632 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:10Z","lastTransitionTime":"2025-12-02T14:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.606785 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.606836 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.606848 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.606868 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.606877 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:10Z","lastTransitionTime":"2025-12-02T14:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.709978 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.710036 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.710049 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.710068 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.710083 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:10Z","lastTransitionTime":"2025-12-02T14:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.813201 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.813313 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.813331 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.813428 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.813451 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:10Z","lastTransitionTime":"2025-12-02T14:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.916351 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.916434 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.916454 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.916488 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:10 crc kubenswrapper[4814]: I1202 14:30:10.916509 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:10Z","lastTransitionTime":"2025-12-02T14:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.020003 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.020071 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.020091 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.020119 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.020134 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:11Z","lastTransitionTime":"2025-12-02T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.123532 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.123587 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.123599 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.123623 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.123635 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:11Z","lastTransitionTime":"2025-12-02T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.226845 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.226906 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.226916 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.226952 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.226969 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:11Z","lastTransitionTime":"2025-12-02T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.329692 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.329761 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.329778 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.329807 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.329825 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:11Z","lastTransitionTime":"2025-12-02T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.434015 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.434092 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.434104 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.434125 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.434136 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:11Z","lastTransitionTime":"2025-12-02T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.537167 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.537215 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.537243 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.537264 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.537276 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:11Z","lastTransitionTime":"2025-12-02T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.640142 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.640196 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.640209 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.640564 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.640594 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:11Z","lastTransitionTime":"2025-12-02T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.754087 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.754256 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.754280 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.754310 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.754329 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:11Z","lastTransitionTime":"2025-12-02T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.857653 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.857700 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.857709 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.857729 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.857741 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:11Z","lastTransitionTime":"2025-12-02T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.961589 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.961647 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.961663 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.961690 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:11 crc kubenswrapper[4814]: I1202 14:30:11.961707 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:11Z","lastTransitionTime":"2025-12-02T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.064345 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.064393 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.064410 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.064430 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.064447 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:12Z","lastTransitionTime":"2025-12-02T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.144646 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.144661 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.144770 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.144885 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:12 crc kubenswrapper[4814]: E1202 14:30:12.145088 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:12 crc kubenswrapper[4814]: E1202 14:30:12.145220 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:12 crc kubenswrapper[4814]: E1202 14:30:12.145337 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:12 crc kubenswrapper[4814]: E1202 14:30:12.145455 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.167428 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.167504 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.167528 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.167559 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.167589 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:12Z","lastTransitionTime":"2025-12-02T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.271031 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.271109 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.271128 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.271157 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.271177 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:12Z","lastTransitionTime":"2025-12-02T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.375278 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.375335 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.375355 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.375393 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.375414 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:12Z","lastTransitionTime":"2025-12-02T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.478517 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.478584 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.478598 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.478621 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.478635 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:12Z","lastTransitionTime":"2025-12-02T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.582120 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.582159 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.582169 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.582186 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.582197 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:12Z","lastTransitionTime":"2025-12-02T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.685332 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.685408 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.685427 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.685460 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.685482 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:12Z","lastTransitionTime":"2025-12-02T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.788719 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.788798 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.788821 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.788852 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.788879 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:12Z","lastTransitionTime":"2025-12-02T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.892886 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.892950 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.892971 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.893001 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.893023 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:12Z","lastTransitionTime":"2025-12-02T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.996353 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.996424 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.996448 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.996486 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:12 crc kubenswrapper[4814]: I1202 14:30:12.996516 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:12Z","lastTransitionTime":"2025-12-02T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.100919 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.101888 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.102012 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.102142 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.102275 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:13Z","lastTransitionTime":"2025-12-02T14:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.206986 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.207055 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.207067 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.207090 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.207107 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:13Z","lastTransitionTime":"2025-12-02T14:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.309765 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.309879 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.309910 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.309948 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.309975 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:13Z","lastTransitionTime":"2025-12-02T14:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.412460 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.412536 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.412561 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.412593 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.412618 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:13Z","lastTransitionTime":"2025-12-02T14:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.516041 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.516128 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.516155 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.516186 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.516210 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:13Z","lastTransitionTime":"2025-12-02T14:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.635908 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.635963 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.635976 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.635995 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.636009 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:13Z","lastTransitionTime":"2025-12-02T14:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.740321 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.740388 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.740411 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.740455 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.740481 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:13Z","lastTransitionTime":"2025-12-02T14:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.844010 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.844729 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.844799 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.844888 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.844964 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:13Z","lastTransitionTime":"2025-12-02T14:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.947805 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.947860 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.947873 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.947892 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:13 crc kubenswrapper[4814]: I1202 14:30:13.947905 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:13Z","lastTransitionTime":"2025-12-02T14:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.050668 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.050713 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.050724 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.050742 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.050753 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:14Z","lastTransitionTime":"2025-12-02T14:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.144719 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.144789 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:14 crc kubenswrapper[4814]: E1202 14:30:14.144916 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.145039 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.145060 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:14 crc kubenswrapper[4814]: E1202 14:30:14.145225 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:14 crc kubenswrapper[4814]: E1202 14:30:14.145503 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:14 crc kubenswrapper[4814]: E1202 14:30:14.145550 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.153005 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.153061 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.153084 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.153113 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.153142 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:14Z","lastTransitionTime":"2025-12-02T14:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.259401 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.259489 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.259500 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.259523 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.259538 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:14Z","lastTransitionTime":"2025-12-02T14:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.363276 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.363628 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.363740 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.364382 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.364535 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:14Z","lastTransitionTime":"2025-12-02T14:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.468570 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.468661 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.468684 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.468714 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.468739 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:14Z","lastTransitionTime":"2025-12-02T14:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.572145 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.572268 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.572292 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.572323 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.572344 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:14Z","lastTransitionTime":"2025-12-02T14:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.674995 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.675031 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.675039 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.675054 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.675063 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:14Z","lastTransitionTime":"2025-12-02T14:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.776977 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.777016 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.777027 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.777045 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.777056 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:14Z","lastTransitionTime":"2025-12-02T14:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.878744 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.878775 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.878784 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.878798 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.878808 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:14Z","lastTransitionTime":"2025-12-02T14:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.981764 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.981844 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.981868 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.981902 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:14 crc kubenswrapper[4814]: I1202 14:30:14.981924 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:14Z","lastTransitionTime":"2025-12-02T14:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.085676 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.085805 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.085824 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.085852 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.085891 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:15Z","lastTransitionTime":"2025-12-02T14:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.189211 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.189330 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.189344 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.189366 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.189379 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:15Z","lastTransitionTime":"2025-12-02T14:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.292159 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.292532 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.292637 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.292782 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.292873 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:15Z","lastTransitionTime":"2025-12-02T14:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.396378 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.396470 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.396493 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.396561 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.396584 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:15Z","lastTransitionTime":"2025-12-02T14:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.499517 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.499556 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.499565 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.499579 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.499589 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:15Z","lastTransitionTime":"2025-12-02T14:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.602770 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.602849 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.602871 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.602896 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.602913 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:15Z","lastTransitionTime":"2025-12-02T14:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.705266 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.705338 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.705362 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.705395 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.705419 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:15Z","lastTransitionTime":"2025-12-02T14:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.808398 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.808469 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.808491 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.808521 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.808544 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:15Z","lastTransitionTime":"2025-12-02T14:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.911873 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.911936 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.911959 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.911988 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:15 crc kubenswrapper[4814]: I1202 14:30:15.912011 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:15Z","lastTransitionTime":"2025-12-02T14:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.015738 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.015825 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.015848 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.015882 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.015907 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:16Z","lastTransitionTime":"2025-12-02T14:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.118944 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.119047 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.119070 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.119097 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.119117 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:16Z","lastTransitionTime":"2025-12-02T14:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.144425 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.144585 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.144630 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:16 crc kubenswrapper[4814]: E1202 14:30:16.144811 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.144884 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:16 crc kubenswrapper[4814]: E1202 14:30:16.145271 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:16 crc kubenswrapper[4814]: E1202 14:30:16.145376 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:16 crc kubenswrapper[4814]: E1202 14:30:16.145786 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.170378 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.224257 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.224341 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.224362 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.224404 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.224425 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:16Z","lastTransitionTime":"2025-12-02T14:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.329153 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.329281 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.329317 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.329352 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.329378 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:16Z","lastTransitionTime":"2025-12-02T14:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.433293 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.433410 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.433662 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.433753 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.433788 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:16Z","lastTransitionTime":"2025-12-02T14:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.537719 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.538155 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.538468 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.538694 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.538955 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:16Z","lastTransitionTime":"2025-12-02T14:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.642093 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.642165 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.642182 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.642208 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.642261 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:16Z","lastTransitionTime":"2025-12-02T14:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.746562 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.746639 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.746664 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.746699 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.746722 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:16Z","lastTransitionTime":"2025-12-02T14:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.850877 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.850950 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.850973 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.851002 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.851024 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:16Z","lastTransitionTime":"2025-12-02T14:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.953793 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.953863 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.953886 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.953916 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:16 crc kubenswrapper[4814]: I1202 14:30:16.953938 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:16Z","lastTransitionTime":"2025-12-02T14:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.057394 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.057470 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.057494 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.057523 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.057549 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:17Z","lastTransitionTime":"2025-12-02T14:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.160469 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.160543 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.160557 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.160583 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.160600 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:17Z","lastTransitionTime":"2025-12-02T14:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.263191 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.263263 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.263276 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.263296 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.263308 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:17Z","lastTransitionTime":"2025-12-02T14:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.366285 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.366327 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.366375 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.366430 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.366448 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:17Z","lastTransitionTime":"2025-12-02T14:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.470112 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.470188 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.470206 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.470263 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.470286 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:17Z","lastTransitionTime":"2025-12-02T14:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.573008 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.573073 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.573091 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.573118 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.573138 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:17Z","lastTransitionTime":"2025-12-02T14:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.676464 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.676531 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.676545 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.676564 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.676577 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:17Z","lastTransitionTime":"2025-12-02T14:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.780056 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.780119 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.780132 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.780153 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.780168 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:17Z","lastTransitionTime":"2025-12-02T14:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.883405 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.883496 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.883522 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.883562 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.883583 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:17Z","lastTransitionTime":"2025-12-02T14:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.987175 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.988120 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.988771 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.989151 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:17 crc kubenswrapper[4814]: I1202 14:30:17.989518 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:17Z","lastTransitionTime":"2025-12-02T14:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.030101 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.030581 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.030850 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.031054 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.031187 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:18Z","lastTransitionTime":"2025-12-02T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:18 crc kubenswrapper[4814]: E1202 14:30:18.048784 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.053075 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.053352 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.053520 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.053674 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.053814 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:18Z","lastTransitionTime":"2025-12-02T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:18 crc kubenswrapper[4814]: E1202 14:30:18.072129 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.077873 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.077965 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.077987 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.078018 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.078034 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:18Z","lastTransitionTime":"2025-12-02T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:18 crc kubenswrapper[4814]: E1202 14:30:18.093534 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.099527 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.099599 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.099618 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.099647 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.099664 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:18Z","lastTransitionTime":"2025-12-02T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:18 crc kubenswrapper[4814]: E1202 14:30:18.117652 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.123436 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.123485 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.123498 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.123518 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.123532 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:18Z","lastTransitionTime":"2025-12-02T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:18 crc kubenswrapper[4814]: E1202 14:30:18.138390 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: E1202 14:30:18.138516 4814 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.140102 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.140155 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.140170 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.140193 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.140207 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:18Z","lastTransitionTime":"2025-12-02T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.143779 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.143870 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.143884 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:18 crc kubenswrapper[4814]: E1202 14:30:18.144013 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.144110 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:18 crc kubenswrapper[4814]: E1202 14:30:18.144214 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:18 crc kubenswrapper[4814]: E1202 14:30:18.144440 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:18 crc kubenswrapper[4814]: E1202 14:30:18.144742 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.155509 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.173540 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"message\\\":\\\":29:50.173720 6451 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1202 14:29:50.171784 6451 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 14:29:50.174448 6451 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 14:29:50.174498 6451 
handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 14:29:50.174525 6451 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 14:29:50.174532 6451 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 14:29:50.174569 6451 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 14:29:50.174577 6451 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 14:29:50.174585 6451 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 14:29:50.174599 6451 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 14:29:50.174605 6451 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 14:29:50.174614 6451 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 14:29:50.174660 6451 factory.go:656] Stopping watch factory\\\\nI1202 14:29:50.174892 6451 ovnkube.go:599] Stopped ovnkube\\\\nI1202 14:29:50.174944 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 14:29:50.175048 6451 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd
3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.183396 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5n4wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad0a031-3e26-4596-a7a9-cd00a95b7650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5n4wm\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.196985 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bc
c6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.210270 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.224649 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.235941 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.242750 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.242809 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.242823 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.242845 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.242858 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:18Z","lastTransitionTime":"2025-12-02T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.255304 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.268089 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c51c3a6690625e0485f495b314d6a6f650f503a120badabf80418b47e77d5bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1cd5f3b148daf6c46795d31f3f5df03b108
61ca23111173fd83723df500849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhbnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.288472 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.300195 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.314172 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.329482 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5486af4-e8de-4a7b-a010-1addc7fe0f90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3085bff468e4e77202bec19f8991e1747d96a401ad61a47c7f5400dc210e9dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a770612ae67e1ad2a03714776995463980ed3dd8e46434adac446ec17495b21e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6009cf24f811b4a07cf2ed259ceb0ff6d9a6a286ae7df4ea87d1873864c47100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.343278 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.346026 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.346075 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.346090 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 
14:30:18.346366 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.346651 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:18Z","lastTransitionTime":"2025-12-02T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.357408 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856a317b4636fc96e9833f76c75051122ac8cde18987182481a998ea6c5824db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:30:08Z\\\",\\\"message\\\":\\\"2025-12-02T14:29:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_aa0e508b-d667-42da-89f7-232456134f15\\\\n2025-12-02T14:29:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_aa0e508b-d667-42da-89f7-232456134f15 to /host/opt/cni/bin/\\\\n2025-12-02T14:29:23Z [verbose] multus-daemon started\\\\n2025-12-02T14:29:23Z [verbose] Readiness Indicator file check\\\\n2025-12-02T14:30:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.373450 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"
exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.385900 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd4360474
41229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.396422 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.407635 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfeb12a3-b48d-4cd2-9d82-2c5cfc4cff0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921bb8cb7d4562e290648eb6e0f3eed3c7c8c12ad094d68cdf26d81b2ce9e693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05d3282ab63cac018ed36a85a203552ed6a576a5f093e338b48392cd08131883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05d3282ab63cac018ed36a85a203552ed6a576a5f093e338b48392cd08131883\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:18Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:18 crc 
kubenswrapper[4814]: I1202 14:30:18.449996 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.450092 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.450123 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.450164 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.450188 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:18Z","lastTransitionTime":"2025-12-02T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.553740 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.553813 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.553834 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.553866 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.553890 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:18Z","lastTransitionTime":"2025-12-02T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.656183 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.656298 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.656324 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.656357 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.656379 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:18Z","lastTransitionTime":"2025-12-02T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.759452 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.759534 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.759559 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.759585 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.759604 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:18Z","lastTransitionTime":"2025-12-02T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.862853 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.862891 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.862903 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.862920 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.862932 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:18Z","lastTransitionTime":"2025-12-02T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.966006 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.966078 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.966124 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.966156 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:18 crc kubenswrapper[4814]: I1202 14:30:18.966180 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:18Z","lastTransitionTime":"2025-12-02T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.069590 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.069648 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.069665 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.069688 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.069708 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:19Z","lastTransitionTime":"2025-12-02T14:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.172542 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.172601 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.172613 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.172634 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.172650 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:19Z","lastTransitionTime":"2025-12-02T14:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.275580 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.275641 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.275666 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.275699 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.275721 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:19Z","lastTransitionTime":"2025-12-02T14:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.377970 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.378021 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.378033 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.378053 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.378068 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:19Z","lastTransitionTime":"2025-12-02T14:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.480728 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.480804 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.480823 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.480852 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.480871 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:19Z","lastTransitionTime":"2025-12-02T14:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.583769 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.583841 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.583858 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.583884 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.583903 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:19Z","lastTransitionTime":"2025-12-02T14:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.687284 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.687349 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.687367 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.687393 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.687413 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:19Z","lastTransitionTime":"2025-12-02T14:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.790819 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.790883 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.790895 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.790920 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.790936 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:19Z","lastTransitionTime":"2025-12-02T14:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.893668 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.893720 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.893730 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.893748 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.893759 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:19Z","lastTransitionTime":"2025-12-02T14:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.996374 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.996454 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.996467 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.996563 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:19 crc kubenswrapper[4814]: I1202 14:30:19.996584 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:19Z","lastTransitionTime":"2025-12-02T14:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.099199 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.099263 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.099273 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.099309 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.099318 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:20Z","lastTransitionTime":"2025-12-02T14:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.139624 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:30:20 crc kubenswrapper[4814]: E1202 14:30:20.139889 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 14:31:24.139842969 +0000 UTC m=+146.264205173 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.143814 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.143851 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:20 crc kubenswrapper[4814]: E1202 14:30:20.144035 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.144051 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.144049 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:20 crc kubenswrapper[4814]: E1202 14:30:20.144366 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:20 crc kubenswrapper[4814]: E1202 14:30:20.144483 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.144698 4814 scope.go:117] "RemoveContainer" containerID="6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9" Dec 02 14:30:20 crc kubenswrapper[4814]: E1202 14:30:20.144775 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.201778 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.202116 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.202128 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.202149 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.202164 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:20Z","lastTransitionTime":"2025-12-02T14:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.241169 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.241214 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.241269 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.241289 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:20 crc kubenswrapper[4814]: E1202 14:30:20.241291 4814 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 02 14:30:20 crc kubenswrapper[4814]: E1202 14:30:20.241372 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 14:31:24.24134728 +0000 UTC m=+146.365709504 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 14:30:20 crc kubenswrapper[4814]: E1202 14:30:20.241403 4814 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 14:30:20 crc kubenswrapper[4814]: E1202 14:30:20.241431 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 14:31:24.241424222 +0000 UTC m=+146.365786416 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 14:30:20 crc kubenswrapper[4814]: E1202 14:30:20.241456 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 14:30:20 crc kubenswrapper[4814]: E1202 14:30:20.241491 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 14:30:20 crc kubenswrapper[4814]: E1202 14:30:20.241505 4814 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:30:20 crc kubenswrapper[4814]: E1202 14:30:20.241514 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 14:30:20 crc kubenswrapper[4814]: E1202 14:30:20.241553 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 14:30:20 crc kubenswrapper[4814]: E1202 14:30:20.241568 4814 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:30:20 crc kubenswrapper[4814]: E1202 14:30:20.241588 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 14:31:24.241564645 +0000 UTC m=+146.365926849 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:30:20 crc kubenswrapper[4814]: E1202 14:30:20.241631 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 14:31:24.241611896 +0000 UTC m=+146.365974100 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.304274 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.304334 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.304345 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.304361 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.304370 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:20Z","lastTransitionTime":"2025-12-02T14:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.407299 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.407342 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.407353 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.407370 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.407381 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:20Z","lastTransitionTime":"2025-12-02T14:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.511166 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.511219 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.511252 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.511275 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.511290 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:20Z","lastTransitionTime":"2025-12-02T14:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.613999 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.614052 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.614063 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.614076 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.614087 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:20Z","lastTransitionTime":"2025-12-02T14:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.666169 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gnpv5_d5cd5566-9b36-421b-8f02-f3d12db9b829/ovnkube-controller/2.log" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.668401 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerStarted","Data":"695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab"} Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.669288 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.690512 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfeb12a3-b48d-4cd2-9d82-2c5cfc4cff0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921bb8cb7d4562e290648eb6e0f3eed3c7c8c12ad094d68cdf26d81b2ce9e693\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05d3282ab63cac018ed36a85a203552ed6a576a5f093e338b48392cd08131883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05d3282ab63cac018ed36a85a203552ed6a576a5f093e338b48392cd08131883\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:20Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.706220 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:20Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.716773 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.716840 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.716856 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.716874 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.716886 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:20Z","lastTransitionTime":"2025-12-02T14:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.721551 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856a317b4636fc96e9833f76c75051122ac8cde18987182481a998ea6c5824db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:30:08Z\\\",\\\"message\\\":\\\"2025-12-02T14:29:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ 
to /host/opt/cni/bin/upgrade_aa0e508b-d667-42da-89f7-232456134f15\\\\n2025-12-02T14:29:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_aa0e508b-d667-42da-89f7-232456134f15 to /host/opt/cni/bin/\\\\n2025-12-02T14:29:23Z [verbose] multus-daemon started\\\\n2025-12-02T14:29:23Z [verbose] Readiness Indicator file check\\\\n2025-12-02T14:30:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/mult
us.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:20Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.740755 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af2
95457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:20Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.756852 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:20Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:20 crc kubenswrapper[4814]: 
I1202 14:30:20.772003 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:20Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.791196 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:20Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.805561 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:20Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.819402 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.819487 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.819503 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.819527 4814 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.819544 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:20Z","lastTransitionTime":"2025-12-02T14:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.829104 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"message\\\":\\\":29:50.173720 6451 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1202 14:29:50.171784 6451 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 14:29:50.174448 6451 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 14:29:50.174498 6451 
handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 14:29:50.174525 6451 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 14:29:50.174532 6451 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 14:29:50.174569 6451 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 14:29:50.174577 6451 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 14:29:50.174585 6451 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 14:29:50.174599 6451 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 14:29:50.174605 6451 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 14:29:50.174614 6451 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 14:29:50.174660 6451 factory.go:656] Stopping watch factory\\\\nI1202 14:29:50.174892 6451 ovnkube.go:599] Stopped ovnkube\\\\nI1202 14:29:50.174944 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 14:29:50.175048 6451 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:20Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.864186 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5n4wm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad0a031-3e26-4596-a7a9-cd00a95b7650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5n4wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:20Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:20 crc 
kubenswrapper[4814]: I1202 14:30:20.895445 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:20Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.912868 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:20Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.921257 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.921297 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.921307 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.921324 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.921337 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:20Z","lastTransitionTime":"2025-12-02T14:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.926390 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:20Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.940548 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:20Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.962801 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:20Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.973732 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c51c3a6690625e0485f495b314d6a6f650f503a120badabf80418b47e77d5bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1cd5f3b148daf6c46795d31f3f5df03b108
61ca23111173fd83723df500849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhbnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:20Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:20 crc kubenswrapper[4814]: I1202 14:30:20.987579 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5486af4-e8de-4a7b-a010-1addc7fe0f90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3085bff468e4e77202bec19f8991e1747d96a401ad61a47c7f5400dc210e9dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a770612ae67e1ad2a03714776995463980ed3dd8e46434adac446ec17495b21e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6009cf24f811b4a07cf2ed259ceb0ff6d9a6a286ae7df4ea87d1873864c47100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:20Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.002836 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.020035 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d56
5e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.024111 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.024168 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.024186 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.024223 4814 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.024263 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:21Z","lastTransitionTime":"2025-12-02T14:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.127544 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.127597 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.127606 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.127624 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.127634 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:21Z","lastTransitionTime":"2025-12-02T14:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.230603 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.230665 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.230679 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.230705 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.230720 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:21Z","lastTransitionTime":"2025-12-02T14:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.334298 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.334359 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.334373 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.334951 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.334992 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:21Z","lastTransitionTime":"2025-12-02T14:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.440025 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.440079 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.440098 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.440125 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.440145 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:21Z","lastTransitionTime":"2025-12-02T14:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.544371 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.544403 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.544414 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.544430 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.544443 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:21Z","lastTransitionTime":"2025-12-02T14:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.648628 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.648987 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.649084 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.649173 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.649297 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:21Z","lastTransitionTime":"2025-12-02T14:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.677148 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gnpv5_d5cd5566-9b36-421b-8f02-f3d12db9b829/ovnkube-controller/3.log" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.678729 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gnpv5_d5cd5566-9b36-421b-8f02-f3d12db9b829/ovnkube-controller/2.log" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.682645 4814 generic.go:334] "Generic (PLEG): container finished" podID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerID="695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab" exitCode=1 Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.682710 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerDied","Data":"695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab"} Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.683006 4814 scope.go:117] "RemoveContainer" containerID="6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.683673 4814 scope.go:117] "RemoveContainer" containerID="695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab" Dec 02 14:30:21 crc kubenswrapper[4814]: E1202 14:30:21.683879 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.703678 4814 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5486af4-e8de-4a7b-a010-1addc7fe0f90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3085bff468e4e77202bec19f8991e1747d96a401ad61a47c7f5400dc210e9dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a770612ae67e1ad2a03714776995463980ed3dd8e46434adac446ec17495b21e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938
0066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6009cf24f811b4a07cf2ed259ceb0ff6d9a6a286ae7df4ea87d1873864c47100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.721462 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.737365 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d56
5e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.752004 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfeb12a3-b48d-4cd2-9d82-2c5cfc4cff0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921bb8cb7d4562e290648eb6e0f3eed3c7c8c12ad094d68cdf26d81b2ce9e693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05d3282ab63cac018ed36a85a203552ed6a576a5f093e338b48392cd08131883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05d3282ab63cac018ed36a85a203552ed6a576a5f093e338b48392cd08131883\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.753693 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.754096 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.754283 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.754444 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.754728 4814 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:21Z","lastTransitionTime":"2025-12-02T14:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.769205 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.787250 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856a317b4636fc96e9833f76c75051122ac8cde18987182481a998ea6c5824db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:30:08Z\\\",\\\"message\\\":\\\"2025-12-02T14:29:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_aa0e508b-d667-42da-89f7-232456134f15\\\\n2025-12-02T14:29:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_aa0e508b-d667-42da-89f7-232456134f15 to /host/opt/cni/bin/\\\\n2025-12-02T14:29:23Z [verbose] multus-daemon started\\\\n2025-12-02T14:29:23Z [verbose] 
Readiness Indicator file check\\\\n2025-12-02T14:30:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.807544 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f2
2b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.821494 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd4360474
41229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.831657 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.845469 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T1
4:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.857142 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.857190 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.857203 4814 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.857242 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.857257 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:21Z","lastTransitionTime":"2025-12-02T14:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.860426 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe
482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.883016 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c80198de0db7381bbef67f5a5335f4c4442aac45724344fd000a8eb64d347e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"message\\\":\\\":29:50.173720 6451 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1202 14:29:50.171784 6451 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 14:29:50.174448 6451 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 14:29:50.174498 6451 
handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 14:29:50.174525 6451 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 14:29:50.174532 6451 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 14:29:50.174569 6451 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 14:29:50.174577 6451 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 14:29:50.174585 6451 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 14:29:50.174599 6451 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 14:29:50.174605 6451 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 14:29:50.174614 6451 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 14:29:50.174660 6451 factory.go:656] Stopping watch factory\\\\nI1202 14:29:50.174892 6451 ovnkube.go:599] Stopped ovnkube\\\\nI1202 14:29:50.174944 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 14:29:50.175048 6451 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:30:21Z\\\",\\\"message\\\":\\\"AT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1202 14:30:21.043098 6860 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy 
controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z]\\\\nI1202 14:30:21.042758 6860 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1202 14:30:21.043114 6860 model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\
"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.896753 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-5n4wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad0a031-3e26-4596-a7a9-cd00a95b7650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5n4wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:21 crc 
kubenswrapper[4814]: I1202 14:30:21.921370 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.940476 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.959730 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.959776 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.959791 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.959813 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.959828 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:21Z","lastTransitionTime":"2025-12-02T14:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.962390 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:21 crc kubenswrapper[4814]: I1202 14:30:21.980199 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.000676 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.018504 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c51c3a6690625e0485f495b314d6a6f650f503a120badabf80418b47e77d5bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1cd5f3b148daf6c46795d31f3f5df03b108
61ca23111173fd83723df500849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhbnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.062560 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.062633 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.062658 4814 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.062688 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.062707 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:22Z","lastTransitionTime":"2025-12-02T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.144836 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.144975 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.144975 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:22 crc kubenswrapper[4814]: E1202 14:30:22.145131 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.145499 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:22 crc kubenswrapper[4814]: E1202 14:30:22.145621 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:22 crc kubenswrapper[4814]: E1202 14:30:22.145842 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:22 crc kubenswrapper[4814]: E1202 14:30:22.146048 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.165842 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.165925 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.165950 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.165986 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.166016 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:22Z","lastTransitionTime":"2025-12-02T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.269927 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.270002 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.270024 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.270057 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.270082 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:22Z","lastTransitionTime":"2025-12-02T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.378078 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.378166 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.378193 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.378271 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.378301 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:22Z","lastTransitionTime":"2025-12-02T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.481972 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.482055 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.482080 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.482112 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.482135 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:22Z","lastTransitionTime":"2025-12-02T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.585471 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.585572 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.585597 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.585625 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.585646 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:22Z","lastTransitionTime":"2025-12-02T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.688345 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.688401 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.688418 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.688446 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.688469 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:22Z","lastTransitionTime":"2025-12-02T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.691156 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gnpv5_d5cd5566-9b36-421b-8f02-f3d12db9b829/ovnkube-controller/3.log" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.697096 4814 scope.go:117] "RemoveContainer" containerID="695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab" Dec 02 14:30:22 crc kubenswrapper[4814]: E1202 14:30:22.698288 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.722003 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af2
95457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.739210 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:22 crc kubenswrapper[4814]: 
I1202 14:30:22.754428 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.775395 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfeb12a3-b48d-4cd2-9d82-2c5cfc4cff0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921bb8cb7d4562e290648eb6e0f3eed3c7c8c12ad094d68cdf26d81b2ce9e693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3
d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05d3282ab63cac018ed36a85a203552ed6a576a5f093e338b48392cd08131883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05d3282ab63cac018ed36a85a203552ed6a576a5f093e338b48392cd08131883\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.791939 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.791983 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.792009 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.792026 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.792037 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:22Z","lastTransitionTime":"2025-12-02T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.793443 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.809904 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856a317b4636fc96e9833f76c75051122ac8cde18987182481a998ea6c5824db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:30:08Z\\\",\\\"message\\\":\\\"2025-12-02T14:29:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_aa0e508b-d667-42da-89f7-232456134f15\\\\n2025-12-02T14:29:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_aa0e508b-d667-42da-89f7-232456134f15 to /host/opt/cni/bin/\\\\n2025-12-02T14:29:23Z [verbose] multus-daemon started\\\\n2025-12-02T14:29:23Z [verbose] 
Readiness Indicator file check\\\\n2025-12-02T14:30:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.821533 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5n4wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad0a031-3e26-4596-a7a9-cd00a95b7650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5n4wm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.836505 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.849972 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.871109 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:30:21Z\\\",\\\"message\\\":\\\"AT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1202 14:30:21.043098 6860 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z]\\\\nI1202 14:30:21.042758 6860 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1202 14:30:21.043114 6860 model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:30:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd
3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.886195 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.895395 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.895428 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.895436 4814 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.895452 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.895463 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:22Z","lastTransitionTime":"2025-12-02T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:22 crc kubenswrapper[4814]: I1202 14:30:22.903428 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:22Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.106472 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.106523 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.106537 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:23 crc 
kubenswrapper[4814]: I1202 14:30:23.106560 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.106576 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:23Z","lastTransitionTime":"2025-12-02T14:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.114165 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c51c3a6690625e0485f495b314d6a6f650f503a120badabf80418b47e77d5bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1cd5f3b148daf6c46795d31f3f5df03b10861ca23111173fd83723df500849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02
T14:29:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhbnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.132730 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e021
2e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.144870 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.157051 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.170821 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5486af4-e8de-4a7b-a010-1addc7fe0f90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3085bff468e4e77202bec19f8991e1747d96a401ad61a47c7f5400dc210e9dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a770612ae67e1ad2a03714776995463980ed3dd8e46434adac446ec17495b21e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6009cf24f811b4a07cf2ed259ceb0ff6d9a6a286ae7df4ea87d1873864c47100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.184633 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:30:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.198868 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d56
5e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:23Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.209604 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.209645 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.209662 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.209680 4814 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.209691 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:23Z","lastTransitionTime":"2025-12-02T14:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.313764 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.313842 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.313868 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.313900 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.313925 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:23Z","lastTransitionTime":"2025-12-02T14:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.417050 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.417119 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.417143 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.417176 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.417200 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:23Z","lastTransitionTime":"2025-12-02T14:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.521113 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.521187 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.521216 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.521283 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.521309 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:23Z","lastTransitionTime":"2025-12-02T14:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.624218 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.624295 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.624308 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.624328 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.624340 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:23Z","lastTransitionTime":"2025-12-02T14:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.727392 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.727456 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.727468 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.727489 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.727501 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:23Z","lastTransitionTime":"2025-12-02T14:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.830640 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.830715 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.830726 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.830749 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.830764 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:23Z","lastTransitionTime":"2025-12-02T14:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.934518 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.934577 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.934591 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.934611 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:23 crc kubenswrapper[4814]: I1202 14:30:23.934626 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:23Z","lastTransitionTime":"2025-12-02T14:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.039510 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.039600 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.039621 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.039657 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.039684 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:24Z","lastTransitionTime":"2025-12-02T14:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.143418 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.143491 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.143514 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.143545 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.143566 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:24Z","lastTransitionTime":"2025-12-02T14:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.143761 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.143838 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.143871 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.143761 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:24 crc kubenswrapper[4814]: E1202 14:30:24.143966 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:24 crc kubenswrapper[4814]: E1202 14:30:24.144101 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:24 crc kubenswrapper[4814]: E1202 14:30:24.144222 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:24 crc kubenswrapper[4814]: E1202 14:30:24.144443 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.246743 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.246814 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.246837 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.246902 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.246927 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:24Z","lastTransitionTime":"2025-12-02T14:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.349861 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.349940 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.349960 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.349989 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.350013 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:24Z","lastTransitionTime":"2025-12-02T14:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.454253 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.454292 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.454303 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.454321 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.454332 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:24Z","lastTransitionTime":"2025-12-02T14:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.557579 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.557665 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.557688 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.557723 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.557755 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:24Z","lastTransitionTime":"2025-12-02T14:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.660676 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.660728 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.660742 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.660770 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.660788 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:24Z","lastTransitionTime":"2025-12-02T14:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.767730 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.768264 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.768278 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.768299 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.768312 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:24Z","lastTransitionTime":"2025-12-02T14:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.871190 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.871279 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.871292 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.871311 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.871325 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:24Z","lastTransitionTime":"2025-12-02T14:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.974338 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.974402 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.974414 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.974439 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:24 crc kubenswrapper[4814]: I1202 14:30:24.974451 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:24Z","lastTransitionTime":"2025-12-02T14:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.078301 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.078416 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.078448 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.078480 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.078497 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:25Z","lastTransitionTime":"2025-12-02T14:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.183490 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.183564 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.183589 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.183622 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.183645 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:25Z","lastTransitionTime":"2025-12-02T14:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.288631 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.288708 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.288728 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.288754 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.288781 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:25Z","lastTransitionTime":"2025-12-02T14:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.392213 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.392272 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.392285 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.392303 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.392317 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:25Z","lastTransitionTime":"2025-12-02T14:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.495473 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.495827 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.495914 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.496020 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.496103 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:25Z","lastTransitionTime":"2025-12-02T14:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.599333 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.599403 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.599416 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.599435 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.599448 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:25Z","lastTransitionTime":"2025-12-02T14:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.702748 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.702790 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.702801 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.702821 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.702830 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:25Z","lastTransitionTime":"2025-12-02T14:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.806880 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.806930 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.806941 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.806962 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.806975 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:25Z","lastTransitionTime":"2025-12-02T14:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.909679 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.909741 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.909753 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.909772 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:25 crc kubenswrapper[4814]: I1202 14:30:25.909785 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:25Z","lastTransitionTime":"2025-12-02T14:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.013107 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.013176 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.013198 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.013273 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.013304 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:26Z","lastTransitionTime":"2025-12-02T14:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.116700 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.116764 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.116778 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.116805 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.116820 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:26Z","lastTransitionTime":"2025-12-02T14:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.144493 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.144592 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.144681 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.144596 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:26 crc kubenswrapper[4814]: E1202 14:30:26.144871 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:26 crc kubenswrapper[4814]: E1202 14:30:26.145113 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:26 crc kubenswrapper[4814]: E1202 14:30:26.145202 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:26 crc kubenswrapper[4814]: E1202 14:30:26.145394 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.219574 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.219618 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.219634 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.219655 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.219670 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:26Z","lastTransitionTime":"2025-12-02T14:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.322576 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.322629 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.322642 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.322660 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.322671 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:26Z","lastTransitionTime":"2025-12-02T14:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.425999 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.426113 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.426125 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.426146 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.426161 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:26Z","lastTransitionTime":"2025-12-02T14:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.529638 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.529731 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.529775 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.529813 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.529845 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:26Z","lastTransitionTime":"2025-12-02T14:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.633218 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.633296 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.633310 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.633329 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.633340 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:26Z","lastTransitionTime":"2025-12-02T14:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.737171 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.737309 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.737342 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.737381 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.737454 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:26Z","lastTransitionTime":"2025-12-02T14:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.840637 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.840701 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.840713 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.840737 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.840751 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:26Z","lastTransitionTime":"2025-12-02T14:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.943686 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.943732 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.943742 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.943761 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:26 crc kubenswrapper[4814]: I1202 14:30:26.943773 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:26Z","lastTransitionTime":"2025-12-02T14:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.046998 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.047052 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.047066 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.047087 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.047102 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:27Z","lastTransitionTime":"2025-12-02T14:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.151485 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.151542 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.151554 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.151572 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.151587 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:27Z","lastTransitionTime":"2025-12-02T14:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.254342 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.254443 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.254461 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.254489 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.254512 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:27Z","lastTransitionTime":"2025-12-02T14:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.358004 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.358050 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.358061 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.358079 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.358092 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:27Z","lastTransitionTime":"2025-12-02T14:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.461487 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.461548 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.461565 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.461585 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.461598 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:27Z","lastTransitionTime":"2025-12-02T14:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.564308 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.564362 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.564376 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.564397 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.564412 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:27Z","lastTransitionTime":"2025-12-02T14:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.667770 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.667826 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.667840 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.667871 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.667886 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:27Z","lastTransitionTime":"2025-12-02T14:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.770612 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.770680 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.770698 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.770725 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.770745 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:27Z","lastTransitionTime":"2025-12-02T14:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.873558 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.873657 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.873683 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.873714 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.873736 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:27Z","lastTransitionTime":"2025-12-02T14:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.979741 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.979797 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.979806 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.979830 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:27 crc kubenswrapper[4814]: I1202 14:30:27.979842 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:27Z","lastTransitionTime":"2025-12-02T14:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.083144 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.083189 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.083200 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.083216 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.083241 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:28Z","lastTransitionTime":"2025-12-02T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.144032 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.144079 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.144132 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.144175 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:28 crc kubenswrapper[4814]: E1202 14:30:28.144184 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:28 crc kubenswrapper[4814]: E1202 14:30:28.144271 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:28 crc kubenswrapper[4814]: E1202 14:30:28.144677 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:28 crc kubenswrapper[4814]: E1202 14:30:28.144711 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.164318 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5486af4-e8de-4a7b-a010-1addc7fe0f90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3085bff468e4e77202bec19f8991e1747d96a401ad61a47c7f5400dc210e9dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\
\\"containerID\\\":\\\"cri-o://a770612ae67e1ad2a03714776995463980ed3dd8e46434adac446ec17495b21e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6009cf24f811b4a07cf2ed259ceb0ff6d9a6a286ae7df4ea87d1873864c47100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dae4ed249a5fddca9ac879b01bba21f9e56214a15961ba6893f20a7eeb15728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.181025 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536f7abeb9f87004d6d7be1f102aa40636e37f8b4c190542b58eb5686dad5370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.185604 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.185676 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.185703 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.185745 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.185770 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:28Z","lastTransitionTime":"2025-12-02T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.199951 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad6048dd9f232716f7bf4f1e9c1330d93d6bd73dde546f9e8abb645f73f74ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e168c8a7e043d565e51aa448821180e789eaadf20fc32fc79fe67edca980d1c2\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.214980 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bg695" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a66bac68-7ada-40a6-8407-54161c4ff527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5669928eb1232b02cba8228f46a15a3cf6fdaec82be752f4c19d65f46be1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hh2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bg695\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.231527 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfeb12a3-b48d-4cd2-9d82-2c5cfc4cff0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921bb8cb7d4562e290648eb6e0f3eed3c7c8c12ad094d68cdf26d81b2ce9e693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05d3282ab63cac018ed36a85a203552ed6a576a5f093e338b48392cd08131883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05d3282ab63cac018ed36a85a203552ed6a576a5f093e338b48392cd08131883\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc 
kubenswrapper[4814]: I1202 14:30:28.253807 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.275220 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd6tj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733ab60d-6038-4a5b-9120-9c4ef1f47774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856a317b4636fc96e9833f76c75051122ac8cde18987182481a998ea6c5824db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:30:08Z\\\",\\\"message\\\":\\\"2025-12-02T14:29:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_aa0e508b-d667-42da-89f7-232456134f15\\\\n2025-12-02T14:29:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_aa0e508b-d667-42da-89f7-232456134f15 to /host/opt/cni/bin/\\\\n2025-12-02T14:29:23Z [verbose] multus-daemon started\\\\n2025-12-02T14:29:23Z [verbose] 
Readiness Indicator file check\\\\n2025-12-02T14:30:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bp6gp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd6tj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.288011 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.288064 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.288077 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.288097 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.288112 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:28Z","lastTransitionTime":"2025-12-02T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.301438 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3acbdd1e-1a6f-407f-be68-b570b5fde941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f22b16c5c271c26f529600b12ab558ae0c60951fca6f535b696b5b1115f4bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ee4dbe54edfc6db4a1cdad7bb2f1e39b6b16874eddca75860cf919b828058a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ba0e706c611b05956ab43bd4be7a0dfd9b9ec25b1962fa32b0650d93663b8d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cb76260e3f08610da34f8fc9ca50d9340e21ade8c0e01c47aa3660244c0574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17af295457428b3cb71c9dde13fe5d52c24e485649f4add1ae196d4e98e9c019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ff4530b0c449f78b807ccc9d1d782d38819347ca05e6e559764648eb6980dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8f3bc022491a0b7c31abd4e7f4335bd00e286706fad2f364a76d60da418878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrf9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k6qjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.323541 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93bbd5a-650e-4f70-9b1c-868df83a667c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d2600978d5fea6fa0ed521933ed609fca6a7f9a539692b86614002537df3ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmdbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k4kc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.342839 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aed12756-8a1c-422e-9a9f-87314f49c27d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a
4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW1202 14:29:15.290281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1202 14:29:15.290389 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 14:29:15.291902 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3106177940/tls.crt::/tmp/serving-cert-3106177940/tls.key\\\\\\\"\\\\nI1202 14:29:15.734436 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 14:29:15.736599 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 14:29:15.736616 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 14:29:15.736636 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 14:29:15.736641 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 14:29:15.740943 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 14:29:15.740968 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 14:29:15.740978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 14:29:15.740981 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 14:29:15.740984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI1202 14:29:15.740984 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 14:29:15.740988 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 14:29:15.743158 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.362265 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfz4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddebe70e-e835-47d3-b46f-1177bc066d81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4cfaad9196864cb0d9e736991b52c0d4b7e1fe482795124187c8bffcb7c82c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbl4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfz4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.391171 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.391223 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.391254 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.391274 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.391288 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:28Z","lastTransitionTime":"2025-12-02T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.401647 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5cd5566-9b36-421b-8f02-f3d12db9b829\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T14:30:21Z\\\",\\\"message\\\":\\\"AT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1202 14:30:21.043098 6860 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:21Z is after 2025-08-24T17:21:41Z]\\\\nI1202 14:30:21.042758 6860 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1202 14:30:21.043114 6860 model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T14:30:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9db6942854791f71dd
3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lh6s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gnpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.422559 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5n4wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad0a031-3e26-4596-a7a9-cd00a95b7650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlc4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5n4wm\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.441786 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26f7d42-5c37-4ae2-a6a4-c5d7bda572e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c51c3a6690625e0485f495b314d6a6f650f503a120badabf80418b47e77d5bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1cd5f3b148daf6c46795d31f3f5df03b10861ca23111173fd83723df500849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:29:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhbnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.478415 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0166be50-3d30-41eb-b09a-c762bfc23130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babb96dc22071c8693890676df48fed8e8dca23a4f79ea2947b18318904a3190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a09663293d9677b2ed0cc75f857f74de6c4ae9feb7ec2e7911b5100c75f1e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab903d076cb456d29073a38c35bf34ce0a354039fd8790f37d5aabfc2b103285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d177b3b6d427a68f19c062fb55132448e2cd522998030c3f6889d492c323c097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d42ad1e1cb2998bf3a3bde2261505671c7f29adcd9ee908e750f76d8fe8159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da32307b52b89bd425ef54062ff728d1e0212e25f795dea1b22a7277a6e62b57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cc3dbe184066c368649ca8fe43bd37f2bbd5cca17fed2de93ae91c85223a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:28:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36d432cba0831b7e260e3fcd9c2a133dd0e49595654dd600fca9d46911e0cde4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T14:29:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.494925 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.495029 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.495096 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.495134 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.495191 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:28Z","lastTransitionTime":"2025-12-02T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.500633 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3984029-7a27-4ca0-ae60-3f7ef26f2de2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T14:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ef7cb785e2ec78e4532511278a8f823ba5347390ec58407b8c00347e16f4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containe
rID\\\":\\\"cri-o://4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44a1c9cc5d60585f876733f280c8449c1094df224520033d0dea9c18cf499ac0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce7c3ec735c4e81854ce98a859b821e69b7a2f8c351acdbd5b8ad0bec0d0467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-control
ler-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T14:28:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.517271 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.517329 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.517354 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.517383 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.517405 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:28Z","lastTransitionTime":"2025-12-02T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.524438 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad9be7fb5cd0995ed1bb9f02d7baa08b13ed77eddfed73e7b8b2d3652b5641c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T14:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: E1202 14:30:28.540002 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.544085 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.547991 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.548018 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.548030 4814 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.548047 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.548062 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:28Z","lastTransitionTime":"2025-12-02T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:28 crc kubenswrapper[4814]: E1202 14:30:28.562484 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.564494 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T14:29:16Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.567762 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.567797 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.567806 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:28 crc 
kubenswrapper[4814]: I1202 14:30:28.567822 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.567833 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:28Z","lastTransitionTime":"2025-12-02T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:28 crc kubenswrapper[4814]: E1202 14:30:28.582260 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff54
7002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cd
fc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"nam
es\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.588245 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.588304 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.588320 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.588340 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.588357 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:28Z","lastTransitionTime":"2025-12-02T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:28 crc kubenswrapper[4814]: E1202 14:30:28.607628 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z"
Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.617469 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.617523 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.617537 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.617561 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.617577 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:28Z","lastTransitionTime":"2025-12-02T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:28 crc kubenswrapper[4814]: E1202 14:30:28.631437 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T14:30:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"81575f95-69c9-44e7-9148-cb51ec732398\\\",\\\"systemUUID\\\":\\\"be1679fa-26d7-434d-a9d1-fc57de342feb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T14:30:28Z is after 2025-08-24T17:21:41Z"
Dec 02 14:30:28 crc kubenswrapper[4814]: E1202 14:30:28.631566 4814 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.633583 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.633611 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.633622 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.633638 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.633650 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:28Z","lastTransitionTime":"2025-12-02T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.736344 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.736401 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.736411 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.736429 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.736442 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:28Z","lastTransitionTime":"2025-12-02T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.839417 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.839466 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.839477 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.839495 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.839507 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:28Z","lastTransitionTime":"2025-12-02T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.942135 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.942170 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.942217 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.942263 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:28 crc kubenswrapper[4814]: I1202 14:30:28.942276 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:28Z","lastTransitionTime":"2025-12-02T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.044937 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.044999 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.045014 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.045041 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.045057 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:29Z","lastTransitionTime":"2025-12-02T14:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.149014 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.149075 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.149092 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.149113 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.149127 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:29Z","lastTransitionTime":"2025-12-02T14:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.252178 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.252224 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.252253 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.252272 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.252283 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:29Z","lastTransitionTime":"2025-12-02T14:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.355338 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.355441 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.355458 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.355483 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.355500 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:29Z","lastTransitionTime":"2025-12-02T14:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.458496 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.458547 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.458557 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.458575 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.458588 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:29Z","lastTransitionTime":"2025-12-02T14:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.561725 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.561778 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.561788 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.561812 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.561824 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:29Z","lastTransitionTime":"2025-12-02T14:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.665794 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.665845 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.665857 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.665882 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.665895 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:29Z","lastTransitionTime":"2025-12-02T14:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.768709 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.768758 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.768770 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.768786 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.768798 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:29Z","lastTransitionTime":"2025-12-02T14:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.872015 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.872054 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.872065 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.872079 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.872090 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:29Z","lastTransitionTime":"2025-12-02T14:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.975554 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.975699 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.975780 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.975810 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:29 crc kubenswrapper[4814]: I1202 14:30:29.975832 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:29Z","lastTransitionTime":"2025-12-02T14:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.079595 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.079670 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.079685 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.079709 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.079730 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:30Z","lastTransitionTime":"2025-12-02T14:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.143768 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.143845 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.143890 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.143968 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:30 crc kubenswrapper[4814]: E1202 14:30:30.144265 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:30 crc kubenswrapper[4814]: E1202 14:30:30.144431 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:30 crc kubenswrapper[4814]: E1202 14:30:30.144652 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:30 crc kubenswrapper[4814]: E1202 14:30:30.144775 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.183457 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.183546 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.183558 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.183585 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.183599 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:30Z","lastTransitionTime":"2025-12-02T14:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.286611 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.286658 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.286667 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.286685 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.286697 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:30Z","lastTransitionTime":"2025-12-02T14:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.389044 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.389097 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.389109 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.389123 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.389133 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:30Z","lastTransitionTime":"2025-12-02T14:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.492421 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.492487 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.492503 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.492529 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.492544 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:30Z","lastTransitionTime":"2025-12-02T14:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.595455 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.595558 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.595588 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.595631 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.595690 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:30Z","lastTransitionTime":"2025-12-02T14:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.698388 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.698467 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.698490 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.698523 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.698545 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:30Z","lastTransitionTime":"2025-12-02T14:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.802165 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.802261 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.802345 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.802378 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.802401 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:30Z","lastTransitionTime":"2025-12-02T14:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.906553 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.906697 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.906722 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.906785 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:30 crc kubenswrapper[4814]: I1202 14:30:30.906806 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:30Z","lastTransitionTime":"2025-12-02T14:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.010438 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.010528 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.010543 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.010568 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.010583 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:31Z","lastTransitionTime":"2025-12-02T14:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.113008 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.113066 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.113078 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.113096 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.113107 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:31Z","lastTransitionTime":"2025-12-02T14:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.216064 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.216140 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.216160 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.216186 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.216208 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:31Z","lastTransitionTime":"2025-12-02T14:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.318972 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.319016 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.319028 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.319051 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.319063 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:31Z","lastTransitionTime":"2025-12-02T14:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.422606 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.422681 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.422694 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.422727 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.422740 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:31Z","lastTransitionTime":"2025-12-02T14:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.525705 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.525761 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.525772 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.525795 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.525808 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:31Z","lastTransitionTime":"2025-12-02T14:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.629121 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.629178 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.629188 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.629206 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.629218 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:31Z","lastTransitionTime":"2025-12-02T14:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.731619 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.731694 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.731716 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.731748 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.731772 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:31Z","lastTransitionTime":"2025-12-02T14:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.835114 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.835193 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.835212 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.835277 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.835297 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:31Z","lastTransitionTime":"2025-12-02T14:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.937954 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.937987 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.937997 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.938012 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:31 crc kubenswrapper[4814]: I1202 14:30:31.938021 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:31Z","lastTransitionTime":"2025-12-02T14:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.041461 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.041557 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.041578 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.041607 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.041624 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:32Z","lastTransitionTime":"2025-12-02T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.143755 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.143757 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.143784 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:32 crc kubenswrapper[4814]: E1202 14:30:32.144022 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:32 crc kubenswrapper[4814]: E1202 14:30:32.144165 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.144281 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:32 crc kubenswrapper[4814]: E1202 14:30:32.144355 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:32 crc kubenswrapper[4814]: E1202 14:30:32.144447 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.144835 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.144890 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.144908 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.144932 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.144951 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:32Z","lastTransitionTime":"2025-12-02T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.249327 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.249387 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.249404 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.249427 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.249445 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:32Z","lastTransitionTime":"2025-12-02T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.352959 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.353026 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.353044 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.353071 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.353089 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:32Z","lastTransitionTime":"2025-12-02T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.456874 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.456933 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.456950 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.456984 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.457003 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:32Z","lastTransitionTime":"2025-12-02T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.560698 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.560777 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.560799 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.560835 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.560853 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:32Z","lastTransitionTime":"2025-12-02T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.664613 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.664682 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.664700 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.664726 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.664737 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:32Z","lastTransitionTime":"2025-12-02T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.768322 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.768412 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.768438 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.768474 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.768501 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:32Z","lastTransitionTime":"2025-12-02T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.871405 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.871474 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.871492 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.871522 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.871542 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:32Z","lastTransitionTime":"2025-12-02T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.975044 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.975125 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.975144 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.975175 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:32 crc kubenswrapper[4814]: I1202 14:30:32.975199 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:32Z","lastTransitionTime":"2025-12-02T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.077943 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.078015 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.078034 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.078062 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.078086 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:33Z","lastTransitionTime":"2025-12-02T14:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.182644 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.182697 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.182718 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.182750 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.182768 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:33Z","lastTransitionTime":"2025-12-02T14:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.285463 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.285543 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.285569 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.285608 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.285636 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:33Z","lastTransitionTime":"2025-12-02T14:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.388474 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.388527 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.388543 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.388565 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.388582 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:33Z","lastTransitionTime":"2025-12-02T14:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.491948 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.491994 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.492005 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.492022 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.492032 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:33Z","lastTransitionTime":"2025-12-02T14:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.595971 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.596033 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.596044 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.596067 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.596082 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:33Z","lastTransitionTime":"2025-12-02T14:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.698760 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.698823 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.698835 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.698855 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.698868 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:33Z","lastTransitionTime":"2025-12-02T14:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.801308 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.801619 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.801638 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.801666 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.801685 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:33Z","lastTransitionTime":"2025-12-02T14:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.905155 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.905209 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.905220 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.905255 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:33 crc kubenswrapper[4814]: I1202 14:30:33.905270 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:33Z","lastTransitionTime":"2025-12-02T14:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.008359 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.008420 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.008433 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.008453 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.008467 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:34Z","lastTransitionTime":"2025-12-02T14:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.110949 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.111027 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.111052 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.111086 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.111107 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:34Z","lastTransitionTime":"2025-12-02T14:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.144523 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.144606 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.144686 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.144865 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:34 crc kubenswrapper[4814]: E1202 14:30:34.144849 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:34 crc kubenswrapper[4814]: E1202 14:30:34.145060 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:34 crc kubenswrapper[4814]: E1202 14:30:34.145180 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:34 crc kubenswrapper[4814]: E1202 14:30:34.145458 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.214352 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.214442 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.214467 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.214494 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.214511 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:34Z","lastTransitionTime":"2025-12-02T14:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.317878 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.317933 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.317948 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.317969 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.317984 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:34Z","lastTransitionTime":"2025-12-02T14:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.422062 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.422122 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.422136 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.422160 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.422177 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:34Z","lastTransitionTime":"2025-12-02T14:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.525558 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.525616 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.525633 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.525656 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.525671 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:34Z","lastTransitionTime":"2025-12-02T14:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.629579 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.629629 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.629639 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.629660 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.629673 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:34Z","lastTransitionTime":"2025-12-02T14:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.732718 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.732791 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.732806 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.732833 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.732874 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:34Z","lastTransitionTime":"2025-12-02T14:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.836613 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.836731 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.836768 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.836810 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.836834 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:34Z","lastTransitionTime":"2025-12-02T14:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.940649 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.940729 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.940749 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.940780 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:34 crc kubenswrapper[4814]: I1202 14:30:34.940802 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:34Z","lastTransitionTime":"2025-12-02T14:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.044514 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.044579 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.044593 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.044622 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.044641 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:35Z","lastTransitionTime":"2025-12-02T14:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.148949 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.149031 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.149191 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.149271 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.149301 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:35Z","lastTransitionTime":"2025-12-02T14:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.252438 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.252494 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.252507 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.252526 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.252541 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:35Z","lastTransitionTime":"2025-12-02T14:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.355559 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.355620 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.355642 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.355664 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.355684 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:35Z","lastTransitionTime":"2025-12-02T14:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.458351 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.458425 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.458463 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.458499 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.458523 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:35Z","lastTransitionTime":"2025-12-02T14:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.561765 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.561819 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.561862 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.561883 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.561896 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:35Z","lastTransitionTime":"2025-12-02T14:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.664920 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.665007 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.665033 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.665067 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.665098 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:35Z","lastTransitionTime":"2025-12-02T14:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.768387 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.768444 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.768465 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.768489 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.768504 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:35Z","lastTransitionTime":"2025-12-02T14:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.871720 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.871795 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.871814 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.871842 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.871866 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:35Z","lastTransitionTime":"2025-12-02T14:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.975495 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.975568 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.975586 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.975616 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:35 crc kubenswrapper[4814]: I1202 14:30:35.975635 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:35Z","lastTransitionTime":"2025-12-02T14:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.078696 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.078806 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.078844 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.078884 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.078914 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:36Z","lastTransitionTime":"2025-12-02T14:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.144546 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:36 crc kubenswrapper[4814]: E1202 14:30:36.144797 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.144888 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.144939 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.144888 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:36 crc kubenswrapper[4814]: E1202 14:30:36.145381 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:36 crc kubenswrapper[4814]: E1202 14:30:36.145541 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:36 crc kubenswrapper[4814]: E1202 14:30:36.145671 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.147601 4814 scope.go:117] "RemoveContainer" containerID="695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab" Dec 02 14:30:36 crc kubenswrapper[4814]: E1202 14:30:36.147953 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.182151 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.182271 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.182297 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.182330 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.182349 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:36Z","lastTransitionTime":"2025-12-02T14:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.286128 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.286184 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.286200 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.286226 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.286278 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:36Z","lastTransitionTime":"2025-12-02T14:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.389951 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.390017 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.390039 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.390067 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.390092 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:36Z","lastTransitionTime":"2025-12-02T14:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.496871 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.496930 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.496942 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.496961 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.496974 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:36Z","lastTransitionTime":"2025-12-02T14:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.600822 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.600898 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.600933 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.600962 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.600984 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:36Z","lastTransitionTime":"2025-12-02T14:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.704506 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.704584 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.704602 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.704636 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.704656 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:36Z","lastTransitionTime":"2025-12-02T14:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.808012 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.808081 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.808093 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.808118 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.808133 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:36Z","lastTransitionTime":"2025-12-02T14:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.912063 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.912161 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.912192 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.912224 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:36 crc kubenswrapper[4814]: I1202 14:30:36.912283 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:36Z","lastTransitionTime":"2025-12-02T14:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.016825 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.016923 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.016943 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.016975 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.016995 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:37Z","lastTransitionTime":"2025-12-02T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.120921 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.120987 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.121002 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.121025 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.121045 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:37Z","lastTransitionTime":"2025-12-02T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.225258 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.225324 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.225342 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.225371 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.225387 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:37Z","lastTransitionTime":"2025-12-02T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.328878 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.328953 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.328972 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.328999 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.329018 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:37Z","lastTransitionTime":"2025-12-02T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.432767 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.432834 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.432879 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.433092 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.433132 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:37Z","lastTransitionTime":"2025-12-02T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.536577 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.536666 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.536690 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.536719 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.536743 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:37Z","lastTransitionTime":"2025-12-02T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.640631 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.640686 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.640739 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.640762 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.640778 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:37Z","lastTransitionTime":"2025-12-02T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.743185 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.743219 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.743247 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.743265 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.743277 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:37Z","lastTransitionTime":"2025-12-02T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.847590 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.847645 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.847661 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.847681 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.847695 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:37Z","lastTransitionTime":"2025-12-02T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.950816 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.950993 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.951016 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.951080 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:37 crc kubenswrapper[4814]: I1202 14:30:37.951099 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:37Z","lastTransitionTime":"2025-12-02T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.054529 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.054622 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.054655 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.054686 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.054709 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:38Z","lastTransitionTime":"2025-12-02T14:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.144275 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.144328 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.144416 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.144505 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:38 crc kubenswrapper[4814]: E1202 14:30:38.144503 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:38 crc kubenswrapper[4814]: E1202 14:30:38.144640 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:38 crc kubenswrapper[4814]: E1202 14:30:38.144741 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:38 crc kubenswrapper[4814]: E1202 14:30:38.144922 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.161642 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.161704 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.161720 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.161750 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.161767 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:38Z","lastTransitionTime":"2025-12-02T14:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.217214 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhbnz" podStartSLOduration=76.217178728 podStartE2EDuration="1m16.217178728s" podCreationTimestamp="2025-12-02 14:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:30:38.216810159 +0000 UTC m=+100.341172403" watchObservedRunningTime="2025-12-02 14:30:38.217178728 +0000 UTC m=+100.341540982" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.249951 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=82.249927891 podStartE2EDuration="1m22.249927891s" podCreationTimestamp="2025-12-02 14:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:30:38.247796437 +0000 UTC m=+100.372158661" watchObservedRunningTime="2025-12-02 14:30:38.249927891 +0000 UTC m=+100.374290115" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.263712 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.263758 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.263771 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.263789 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.263802 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:38Z","lastTransitionTime":"2025-12-02T14:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.268669 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=82.268656292 podStartE2EDuration="1m22.268656292s" podCreationTimestamp="2025-12-02 14:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:30:38.268488057 +0000 UTC m=+100.392850271" watchObservedRunningTime="2025-12-02 14:30:38.268656292 +0000 UTC m=+100.393018506" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.310049 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.310023851 podStartE2EDuration="48.310023851s" podCreationTimestamp="2025-12-02 14:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:30:38.309516248 +0000 UTC m=+100.433878452" watchObservedRunningTime="2025-12-02 14:30:38.310023851 +0000 UTC m=+100.434386055" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.366921 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.366958 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.366968 4814 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.366983 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.366993 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:38Z","lastTransitionTime":"2025-12-02T14:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.383914 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-k6qjb" podStartSLOduration=77.383881807 podStartE2EDuration="1m17.383881807s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:30:38.368370397 +0000 UTC m=+100.492732601" watchObservedRunningTime="2025-12-02 14:30:38.383881807 +0000 UTC m=+100.508244011" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.384355 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podStartSLOduration=77.384347909 podStartE2EDuration="1m17.384347909s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:30:38.383756444 +0000 UTC m=+100.508118658" watchObservedRunningTime="2025-12-02 14:30:38.384347909 +0000 UTC m=+100.508710113" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 
14:30:38.397655 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-bg695" podStartSLOduration=77.397630503 podStartE2EDuration="1m17.397630503s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:30:38.397398417 +0000 UTC m=+100.521760621" watchObservedRunningTime="2025-12-02 14:30:38.397630503 +0000 UTC m=+100.521992707" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.412339 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=22.412312162 podStartE2EDuration="22.412312162s" podCreationTimestamp="2025-12-02 14:30:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:30:38.411920262 +0000 UTC m=+100.536282466" watchObservedRunningTime="2025-12-02 14:30:38.412312162 +0000 UTC m=+100.536674376" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.460648 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-cd6tj" podStartSLOduration=77.460623496 podStartE2EDuration="1m17.460623496s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:30:38.446597373 +0000 UTC m=+100.570959587" watchObservedRunningTime="2025-12-02 14:30:38.460623496 +0000 UTC m=+100.584985700" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.470108 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.470147 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.470156 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.470172 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.470185 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:38Z","lastTransitionTime":"2025-12-02T14:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.475857 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=82.475844238 podStartE2EDuration="1m22.475844238s" podCreationTimestamp="2025-12-02 14:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:30:38.475659864 +0000 UTC m=+100.600022058" watchObservedRunningTime="2025-12-02 14:30:38.475844238 +0000 UTC m=+100.600206442" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.490664 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bfz4v" podStartSLOduration=77.49064273 podStartE2EDuration="1m17.49064273s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:30:38.490410624 +0000 UTC m=+100.614772828" watchObservedRunningTime="2025-12-02 14:30:38.49064273 +0000 UTC 
m=+100.615004934" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.573414 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.573493 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.573512 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.573540 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.573560 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:38Z","lastTransitionTime":"2025-12-02T14:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.676597 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.676679 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.676697 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.676728 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.676749 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:38Z","lastTransitionTime":"2025-12-02T14:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.780527 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.780605 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.780632 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.780669 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.780696 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:38Z","lastTransitionTime":"2025-12-02T14:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.871383 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.871461 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.871480 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.871508 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.871527 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T14:30:38Z","lastTransitionTime":"2025-12-02T14:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.928740 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld"] Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.929447 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.935131 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.936375 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.936760 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 02 14:30:38 crc kubenswrapper[4814]: I1202 14:30:38.937006 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 02 14:30:39 crc kubenswrapper[4814]: I1202 14:30:39.007436 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4f32f4e5-b394-4f94-8b8a-97005ddd88a1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hm5ld\" (UID: \"4f32f4e5-b394-4f94-8b8a-97005ddd88a1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld" Dec 02 14:30:39 crc kubenswrapper[4814]: I1202 14:30:39.007745 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f32f4e5-b394-4f94-8b8a-97005ddd88a1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hm5ld\" (UID: \"4f32f4e5-b394-4f94-8b8a-97005ddd88a1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld" Dec 02 14:30:39 crc kubenswrapper[4814]: I1202 14:30:39.007935 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/4f32f4e5-b394-4f94-8b8a-97005ddd88a1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hm5ld\" (UID: \"4f32f4e5-b394-4f94-8b8a-97005ddd88a1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld" Dec 02 14:30:39 crc kubenswrapper[4814]: I1202 14:30:39.008052 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f32f4e5-b394-4f94-8b8a-97005ddd88a1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hm5ld\" (UID: \"4f32f4e5-b394-4f94-8b8a-97005ddd88a1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld" Dec 02 14:30:39 crc kubenswrapper[4814]: I1202 14:30:39.008204 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4f32f4e5-b394-4f94-8b8a-97005ddd88a1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hm5ld\" (UID: \"4f32f4e5-b394-4f94-8b8a-97005ddd88a1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld" Dec 02 14:30:39 crc kubenswrapper[4814]: I1202 14:30:39.109632 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f32f4e5-b394-4f94-8b8a-97005ddd88a1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hm5ld\" (UID: \"4f32f4e5-b394-4f94-8b8a-97005ddd88a1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld" Dec 02 14:30:39 crc kubenswrapper[4814]: I1202 14:30:39.109686 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f32f4e5-b394-4f94-8b8a-97005ddd88a1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hm5ld\" (UID: \"4f32f4e5-b394-4f94-8b8a-97005ddd88a1\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld" Dec 02 14:30:39 crc kubenswrapper[4814]: I1202 14:30:39.109707 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f32f4e5-b394-4f94-8b8a-97005ddd88a1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hm5ld\" (UID: \"4f32f4e5-b394-4f94-8b8a-97005ddd88a1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld" Dec 02 14:30:39 crc kubenswrapper[4814]: I1202 14:30:39.109771 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4f32f4e5-b394-4f94-8b8a-97005ddd88a1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hm5ld\" (UID: \"4f32f4e5-b394-4f94-8b8a-97005ddd88a1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld" Dec 02 14:30:39 crc kubenswrapper[4814]: I1202 14:30:39.110031 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4f32f4e5-b394-4f94-8b8a-97005ddd88a1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hm5ld\" (UID: \"4f32f4e5-b394-4f94-8b8a-97005ddd88a1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld" Dec 02 14:30:39 crc kubenswrapper[4814]: I1202 14:30:39.110183 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4f32f4e5-b394-4f94-8b8a-97005ddd88a1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hm5ld\" (UID: \"4f32f4e5-b394-4f94-8b8a-97005ddd88a1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld" Dec 02 14:30:39 crc kubenswrapper[4814]: I1202 14:30:39.110284 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/4f32f4e5-b394-4f94-8b8a-97005ddd88a1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hm5ld\" (UID: \"4f32f4e5-b394-4f94-8b8a-97005ddd88a1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld" Dec 02 14:30:39 crc kubenswrapper[4814]: I1202 14:30:39.112961 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f32f4e5-b394-4f94-8b8a-97005ddd88a1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hm5ld\" (UID: \"4f32f4e5-b394-4f94-8b8a-97005ddd88a1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld" Dec 02 14:30:39 crc kubenswrapper[4814]: I1202 14:30:39.121904 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f32f4e5-b394-4f94-8b8a-97005ddd88a1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hm5ld\" (UID: \"4f32f4e5-b394-4f94-8b8a-97005ddd88a1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld" Dec 02 14:30:39 crc kubenswrapper[4814]: I1202 14:30:39.129860 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f32f4e5-b394-4f94-8b8a-97005ddd88a1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hm5ld\" (UID: \"4f32f4e5-b394-4f94-8b8a-97005ddd88a1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld" Dec 02 14:30:39 crc kubenswrapper[4814]: I1202 14:30:39.245365 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld" Dec 02 14:30:39 crc kubenswrapper[4814]: W1202 14:30:39.269958 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f32f4e5_b394_4f94_8b8a_97005ddd88a1.slice/crio-2ec36982fa8b5bb770ead69c7df197f537144e2f5bec4c6871a9d67c4c57fc65 WatchSource:0}: Error finding container 2ec36982fa8b5bb770ead69c7df197f537144e2f5bec4c6871a9d67c4c57fc65: Status 404 returned error can't find the container with id 2ec36982fa8b5bb770ead69c7df197f537144e2f5bec4c6871a9d67c4c57fc65 Dec 02 14:30:39 crc kubenswrapper[4814]: I1202 14:30:39.764776 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld" event={"ID":"4f32f4e5-b394-4f94-8b8a-97005ddd88a1","Type":"ContainerStarted","Data":"54583df5b1707d223c73f7446011e03fd6de667f71431929b3581795ce246fae"} Dec 02 14:30:39 crc kubenswrapper[4814]: I1202 14:30:39.764862 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld" event={"ID":"4f32f4e5-b394-4f94-8b8a-97005ddd88a1","Type":"ContainerStarted","Data":"2ec36982fa8b5bb770ead69c7df197f537144e2f5bec4c6871a9d67c4c57fc65"} Dec 02 14:30:40 crc kubenswrapper[4814]: I1202 14:30:40.121390 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs\") pod \"network-metrics-daemon-5n4wm\" (UID: \"2ad0a031-3e26-4596-a7a9-cd00a95b7650\") " pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:40 crc kubenswrapper[4814]: E1202 14:30:40.121583 4814 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 14:30:40 crc kubenswrapper[4814]: E1202 14:30:40.121665 4814 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs podName:2ad0a031-3e26-4596-a7a9-cd00a95b7650 nodeName:}" failed. No retries permitted until 2025-12-02 14:31:44.121643088 +0000 UTC m=+166.246005322 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs") pod "network-metrics-daemon-5n4wm" (UID: "2ad0a031-3e26-4596-a7a9-cd00a95b7650") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 14:30:40 crc kubenswrapper[4814]: I1202 14:30:40.144503 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:40 crc kubenswrapper[4814]: I1202 14:30:40.144593 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:40 crc kubenswrapper[4814]: I1202 14:30:40.144656 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:40 crc kubenswrapper[4814]: I1202 14:30:40.144624 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:40 crc kubenswrapper[4814]: E1202 14:30:40.144767 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:40 crc kubenswrapper[4814]: E1202 14:30:40.144906 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:40 crc kubenswrapper[4814]: E1202 14:30:40.145120 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:40 crc kubenswrapper[4814]: E1202 14:30:40.145212 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:42 crc kubenswrapper[4814]: I1202 14:30:42.144096 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:42 crc kubenswrapper[4814]: I1202 14:30:42.144138 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:42 crc kubenswrapper[4814]: I1202 14:30:42.144154 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:42 crc kubenswrapper[4814]: I1202 14:30:42.144456 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:42 crc kubenswrapper[4814]: E1202 14:30:42.144706 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:42 crc kubenswrapper[4814]: E1202 14:30:42.144762 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:42 crc kubenswrapper[4814]: E1202 14:30:42.144946 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:42 crc kubenswrapper[4814]: E1202 14:30:42.145033 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:44 crc kubenswrapper[4814]: I1202 14:30:44.144864 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:44 crc kubenswrapper[4814]: I1202 14:30:44.144902 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:44 crc kubenswrapper[4814]: E1202 14:30:44.145013 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:44 crc kubenswrapper[4814]: I1202 14:30:44.145051 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:44 crc kubenswrapper[4814]: I1202 14:30:44.145109 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:44 crc kubenswrapper[4814]: E1202 14:30:44.145211 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:44 crc kubenswrapper[4814]: E1202 14:30:44.145850 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:44 crc kubenswrapper[4814]: E1202 14:30:44.146072 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:46 crc kubenswrapper[4814]: I1202 14:30:46.143815 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:46 crc kubenswrapper[4814]: I1202 14:30:46.143912 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:46 crc kubenswrapper[4814]: I1202 14:30:46.143938 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:46 crc kubenswrapper[4814]: I1202 14:30:46.143858 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:46 crc kubenswrapper[4814]: E1202 14:30:46.144096 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:46 crc kubenswrapper[4814]: E1202 14:30:46.144274 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:46 crc kubenswrapper[4814]: E1202 14:30:46.144472 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:46 crc kubenswrapper[4814]: E1202 14:30:46.144611 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:48 crc kubenswrapper[4814]: I1202 14:30:48.144395 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:48 crc kubenswrapper[4814]: I1202 14:30:48.144483 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:48 crc kubenswrapper[4814]: I1202 14:30:48.144422 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:48 crc kubenswrapper[4814]: I1202 14:30:48.144413 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:48 crc kubenswrapper[4814]: E1202 14:30:48.145652 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:48 crc kubenswrapper[4814]: E1202 14:30:48.145753 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:48 crc kubenswrapper[4814]: E1202 14:30:48.146206 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:48 crc kubenswrapper[4814]: E1202 14:30:48.146327 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:48 crc kubenswrapper[4814]: I1202 14:30:48.146564 4814 scope.go:117] "RemoveContainer" containerID="695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab" Dec 02 14:30:48 crc kubenswrapper[4814]: E1202 14:30:48.146738 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" Dec 02 14:30:50 crc kubenswrapper[4814]: I1202 14:30:50.143824 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:50 crc kubenswrapper[4814]: I1202 14:30:50.143908 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:50 crc kubenswrapper[4814]: I1202 14:30:50.143844 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:50 crc kubenswrapper[4814]: I1202 14:30:50.143919 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:50 crc kubenswrapper[4814]: E1202 14:30:50.144009 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:50 crc kubenswrapper[4814]: E1202 14:30:50.144143 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:50 crc kubenswrapper[4814]: E1202 14:30:50.144266 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:50 crc kubenswrapper[4814]: E1202 14:30:50.144346 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:52 crc kubenswrapper[4814]: I1202 14:30:52.143912 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:52 crc kubenswrapper[4814]: I1202 14:30:52.143968 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:52 crc kubenswrapper[4814]: I1202 14:30:52.143913 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:52 crc kubenswrapper[4814]: E1202 14:30:52.144147 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:52 crc kubenswrapper[4814]: I1202 14:30:52.143933 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:52 crc kubenswrapper[4814]: E1202 14:30:52.144070 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:52 crc kubenswrapper[4814]: E1202 14:30:52.144495 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:52 crc kubenswrapper[4814]: E1202 14:30:52.144568 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:54 crc kubenswrapper[4814]: I1202 14:30:54.144102 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:54 crc kubenswrapper[4814]: I1202 14:30:54.144452 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:54 crc kubenswrapper[4814]: E1202 14:30:54.144871 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:54 crc kubenswrapper[4814]: I1202 14:30:54.144598 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:54 crc kubenswrapper[4814]: E1202 14:30:54.145064 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:54 crc kubenswrapper[4814]: I1202 14:30:54.144509 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:54 crc kubenswrapper[4814]: E1202 14:30:54.145361 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:54 crc kubenswrapper[4814]: E1202 14:30:54.145562 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:54 crc kubenswrapper[4814]: I1202 14:30:54.825362 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cd6tj_733ab60d-6038-4a5b-9120-9c4ef1f47774/kube-multus/1.log" Dec 02 14:30:54 crc kubenswrapper[4814]: I1202 14:30:54.826238 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cd6tj_733ab60d-6038-4a5b-9120-9c4ef1f47774/kube-multus/0.log" Dec 02 14:30:54 crc kubenswrapper[4814]: I1202 14:30:54.826310 4814 generic.go:334] "Generic (PLEG): container finished" podID="733ab60d-6038-4a5b-9120-9c4ef1f47774" containerID="856a317b4636fc96e9833f76c75051122ac8cde18987182481a998ea6c5824db" exitCode=1 Dec 02 14:30:54 crc kubenswrapper[4814]: I1202 14:30:54.826358 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cd6tj" event={"ID":"733ab60d-6038-4a5b-9120-9c4ef1f47774","Type":"ContainerDied","Data":"856a317b4636fc96e9833f76c75051122ac8cde18987182481a998ea6c5824db"} Dec 02 14:30:54 crc kubenswrapper[4814]: I1202 14:30:54.826408 4814 scope.go:117] "RemoveContainer" containerID="b64d4a6e4acda643c3367925401ac20f454b1ffd0fb38094afaa92479926af1e" Dec 02 14:30:54 crc kubenswrapper[4814]: I1202 14:30:54.827048 4814 scope.go:117] "RemoveContainer" containerID="856a317b4636fc96e9833f76c75051122ac8cde18987182481a998ea6c5824db" Dec 02 14:30:54 crc kubenswrapper[4814]: E1202 14:30:54.827286 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-cd6tj_openshift-multus(733ab60d-6038-4a5b-9120-9c4ef1f47774)\"" pod="openshift-multus/multus-cd6tj" podUID="733ab60d-6038-4a5b-9120-9c4ef1f47774" Dec 02 14:30:54 crc kubenswrapper[4814]: I1202 14:30:54.857774 4814 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hm5ld" podStartSLOduration=93.85774790400001 podStartE2EDuration="1m33.857747904s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:30:39.783458989 +0000 UTC m=+101.907821203" watchObservedRunningTime="2025-12-02 14:30:54.857747904 +0000 UTC m=+116.982110148" Dec 02 14:30:55 crc kubenswrapper[4814]: I1202 14:30:55.831642 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cd6tj_733ab60d-6038-4a5b-9120-9c4ef1f47774/kube-multus/1.log" Dec 02 14:30:56 crc kubenswrapper[4814]: I1202 14:30:56.144716 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:56 crc kubenswrapper[4814]: E1202 14:30:56.144897 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:56 crc kubenswrapper[4814]: I1202 14:30:56.144965 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:56 crc kubenswrapper[4814]: I1202 14:30:56.145065 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:56 crc kubenswrapper[4814]: I1202 14:30:56.145225 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:56 crc kubenswrapper[4814]: E1202 14:30:56.145214 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:56 crc kubenswrapper[4814]: E1202 14:30:56.145466 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:56 crc kubenswrapper[4814]: E1202 14:30:56.145583 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:58 crc kubenswrapper[4814]: I1202 14:30:58.144147 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:30:58 crc kubenswrapper[4814]: E1202 14:30:58.145626 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:30:58 crc kubenswrapper[4814]: I1202 14:30:58.145863 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:30:58 crc kubenswrapper[4814]: I1202 14:30:58.145906 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:30:58 crc kubenswrapper[4814]: E1202 14:30:58.145986 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:30:58 crc kubenswrapper[4814]: I1202 14:30:58.146060 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:30:58 crc kubenswrapper[4814]: E1202 14:30:58.146138 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:30:58 crc kubenswrapper[4814]: E1202 14:30:58.146493 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:30:58 crc kubenswrapper[4814]: E1202 14:30:58.165627 4814 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 02 14:30:58 crc kubenswrapper[4814]: E1202 14:30:58.305045 4814 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 14:31:00 crc kubenswrapper[4814]: I1202 14:31:00.143755 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:31:00 crc kubenswrapper[4814]: I1202 14:31:00.144212 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:31:00 crc kubenswrapper[4814]: I1202 14:31:00.144271 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:31:00 crc kubenswrapper[4814]: E1202 14:31:00.144264 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:31:00 crc kubenswrapper[4814]: E1202 14:31:00.144328 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:31:00 crc kubenswrapper[4814]: I1202 14:31:00.144018 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:31:00 crc kubenswrapper[4814]: E1202 14:31:00.144444 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:31:00 crc kubenswrapper[4814]: E1202 14:31:00.144560 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:31:01 crc kubenswrapper[4814]: I1202 14:31:01.145819 4814 scope.go:117] "RemoveContainer" containerID="695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab" Dec 02 14:31:01 crc kubenswrapper[4814]: E1202 14:31:01.146051 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gnpv5_openshift-ovn-kubernetes(d5cd5566-9b36-421b-8f02-f3d12db9b829)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" Dec 02 14:31:02 crc kubenswrapper[4814]: I1202 14:31:02.143921 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:31:02 crc kubenswrapper[4814]: I1202 14:31:02.144000 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:31:02 crc kubenswrapper[4814]: I1202 14:31:02.144005 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:31:02 crc kubenswrapper[4814]: E1202 14:31:02.144161 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:31:02 crc kubenswrapper[4814]: I1202 14:31:02.144189 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:31:02 crc kubenswrapper[4814]: E1202 14:31:02.144337 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:31:02 crc kubenswrapper[4814]: E1202 14:31:02.144485 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:31:02 crc kubenswrapper[4814]: E1202 14:31:02.144577 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:31:03 crc kubenswrapper[4814]: E1202 14:31:03.306371 4814 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 14:31:04 crc kubenswrapper[4814]: I1202 14:31:04.143998 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:31:04 crc kubenswrapper[4814]: I1202 14:31:04.144119 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:31:04 crc kubenswrapper[4814]: I1202 14:31:04.144025 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:31:04 crc kubenswrapper[4814]: E1202 14:31:04.144303 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:31:04 crc kubenswrapper[4814]: I1202 14:31:04.144326 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:31:04 crc kubenswrapper[4814]: E1202 14:31:04.144434 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:31:04 crc kubenswrapper[4814]: E1202 14:31:04.144554 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:31:04 crc kubenswrapper[4814]: E1202 14:31:04.144652 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:31:06 crc kubenswrapper[4814]: I1202 14:31:06.144543 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:31:06 crc kubenswrapper[4814]: I1202 14:31:06.144582 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:31:06 crc kubenswrapper[4814]: I1202 14:31:06.144567 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:31:06 crc kubenswrapper[4814]: E1202 14:31:06.144742 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:31:06 crc kubenswrapper[4814]: E1202 14:31:06.144814 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:31:06 crc kubenswrapper[4814]: I1202 14:31:06.144863 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:31:06 crc kubenswrapper[4814]: E1202 14:31:06.145016 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:31:06 crc kubenswrapper[4814]: E1202 14:31:06.145097 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:31:08 crc kubenswrapper[4814]: I1202 14:31:08.144215 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:31:08 crc kubenswrapper[4814]: I1202 14:31:08.144263 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:31:08 crc kubenswrapper[4814]: I1202 14:31:08.145603 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:31:08 crc kubenswrapper[4814]: I1202 14:31:08.145627 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:31:08 crc kubenswrapper[4814]: I1202 14:31:08.145715 4814 scope.go:117] "RemoveContainer" containerID="856a317b4636fc96e9833f76c75051122ac8cde18987182481a998ea6c5824db" Dec 02 14:31:08 crc kubenswrapper[4814]: E1202 14:31:08.145848 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:31:08 crc kubenswrapper[4814]: E1202 14:31:08.145961 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:31:08 crc kubenswrapper[4814]: E1202 14:31:08.146171 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:31:08 crc kubenswrapper[4814]: E1202 14:31:08.146409 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:31:08 crc kubenswrapper[4814]: E1202 14:31:08.308149 4814 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Dec 02 14:31:08 crc kubenswrapper[4814]: I1202 14:31:08.882033 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cd6tj_733ab60d-6038-4a5b-9120-9c4ef1f47774/kube-multus/1.log" Dec 02 14:31:08 crc kubenswrapper[4814]: I1202 14:31:08.882169 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cd6tj" event={"ID":"733ab60d-6038-4a5b-9120-9c4ef1f47774","Type":"ContainerStarted","Data":"848e89bc2046be5579f5aafbad4aee9b0f7d56fa9305f8e6ccd290ab3578b5ef"} Dec 02 14:31:10 crc kubenswrapper[4814]: I1202 14:31:10.143735 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:31:10 crc kubenswrapper[4814]: I1202 14:31:10.143796 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:31:10 crc kubenswrapper[4814]: I1202 14:31:10.143918 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:31:10 crc kubenswrapper[4814]: I1202 14:31:10.144094 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:31:10 crc kubenswrapper[4814]: E1202 14:31:10.144085 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:31:10 crc kubenswrapper[4814]: E1202 14:31:10.144221 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:31:10 crc kubenswrapper[4814]: E1202 14:31:10.144438 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:31:10 crc kubenswrapper[4814]: E1202 14:31:10.144526 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:31:12 crc kubenswrapper[4814]: I1202 14:31:12.143750 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:31:12 crc kubenswrapper[4814]: I1202 14:31:12.143823 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:31:12 crc kubenswrapper[4814]: I1202 14:31:12.143775 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:31:12 crc kubenswrapper[4814]: E1202 14:31:12.143981 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:31:12 crc kubenswrapper[4814]: I1202 14:31:12.143751 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:31:12 crc kubenswrapper[4814]: E1202 14:31:12.144135 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:31:12 crc kubenswrapper[4814]: E1202 14:31:12.144226 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:31:12 crc kubenswrapper[4814]: I1202 14:31:12.145740 4814 scope.go:117] "RemoveContainer" containerID="695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab" Dec 02 14:31:12 crc kubenswrapper[4814]: E1202 14:31:12.145762 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:31:12 crc kubenswrapper[4814]: I1202 14:31:12.898981 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gnpv5_d5cd5566-9b36-421b-8f02-f3d12db9b829/ovnkube-controller/3.log" Dec 02 14:31:12 crc kubenswrapper[4814]: I1202 14:31:12.901924 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerStarted","Data":"8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f"} Dec 02 14:31:12 crc kubenswrapper[4814]: I1202 14:31:12.902524 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:31:12 crc kubenswrapper[4814]: I1202 14:31:12.935312 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" podStartSLOduration=111.935287344 podStartE2EDuration="1m51.935287344s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:12.934255608 +0000 UTC m=+135.058617832" 
watchObservedRunningTime="2025-12-02 14:31:12.935287344 +0000 UTC m=+135.059649548" Dec 02 14:31:13 crc kubenswrapper[4814]: I1202 14:31:13.050280 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5n4wm"] Dec 02 14:31:13 crc kubenswrapper[4814]: I1202 14:31:13.050463 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:31:13 crc kubenswrapper[4814]: E1202 14:31:13.050586 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:31:13 crc kubenswrapper[4814]: E1202 14:31:13.310657 4814 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 14:31:14 crc kubenswrapper[4814]: I1202 14:31:14.144049 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:31:14 crc kubenswrapper[4814]: I1202 14:31:14.144044 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:31:14 crc kubenswrapper[4814]: I1202 14:31:14.144361 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:31:14 crc kubenswrapper[4814]: E1202 14:31:14.144470 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:31:14 crc kubenswrapper[4814]: E1202 14:31:14.144182 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:31:14 crc kubenswrapper[4814]: E1202 14:31:14.144621 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:31:15 crc kubenswrapper[4814]: I1202 14:31:15.144119 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:31:15 crc kubenswrapper[4814]: E1202 14:31:15.144294 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:31:16 crc kubenswrapper[4814]: I1202 14:31:16.143607 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:31:16 crc kubenswrapper[4814]: I1202 14:31:16.143712 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:31:16 crc kubenswrapper[4814]: E1202 14:31:16.143859 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:31:16 crc kubenswrapper[4814]: E1202 14:31:16.144041 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:31:16 crc kubenswrapper[4814]: I1202 14:31:16.144318 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:31:16 crc kubenswrapper[4814]: E1202 14:31:16.145026 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:31:17 crc kubenswrapper[4814]: I1202 14:31:17.144159 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:31:17 crc kubenswrapper[4814]: E1202 14:31:17.144574 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5n4wm" podUID="2ad0a031-3e26-4596-a7a9-cd00a95b7650" Dec 02 14:31:18 crc kubenswrapper[4814]: I1202 14:31:18.144354 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:31:18 crc kubenswrapper[4814]: I1202 14:31:18.144407 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:31:18 crc kubenswrapper[4814]: I1202 14:31:18.144415 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:31:18 crc kubenswrapper[4814]: E1202 14:31:18.145333 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 14:31:18 crc kubenswrapper[4814]: E1202 14:31:18.145924 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 14:31:18 crc kubenswrapper[4814]: E1202 14:31:18.146388 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 14:31:19 crc kubenswrapper[4814]: I1202 14:31:19.144586 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:31:19 crc kubenswrapper[4814]: I1202 14:31:19.147325 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 02 14:31:19 crc kubenswrapper[4814]: I1202 14:31:19.147352 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.088415 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.121754 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c2xpx"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.122327 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.126310 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tq8bc"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.127095 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tq8bc" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.128154 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.128651 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.128952 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.129459 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.129559 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.130100 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.130103 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.130481 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qgjwj"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.160643 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.160965 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qgjwj" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.161062 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.161528 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.162020 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.164199 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.167973 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.168254 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.168468 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.168688 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.169051 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 02 14:31:20 crc 
kubenswrapper[4814]: I1202 14:31:20.169253 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.169291 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.169458 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.169931 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.173501 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5vc2w"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.174219 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hjjzp"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.174529 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fxs2g"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.175283 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.175827 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-tn4k9"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.176158 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-q8tkk"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.176177 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.176267 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-fxs2g" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.176388 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tn4k9" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.177315 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.177624 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-bmxm4"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.177967 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.178302 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-q8tkk" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.178302 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.178396 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.178458 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmxm4" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.178315 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7nc5"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.179874 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7nc5" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.186945 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.187315 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.187317 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.187626 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.188283 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.188654 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.188838 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.188868 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 02 
14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.189898 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bf522e7-cf36-4600-812d-8f1c82d9ac16-config\") pod \"machine-approver-56656f9798-bmxm4\" (UID: \"0bf522e7-cf36-4600-812d-8f1c82d9ac16\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmxm4" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.189944 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8725a2f3-4453-49c6-a55e-2ec80acf982d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qgjwj\" (UID: \"8725a2f3-4453-49c6-a55e-2ec80acf982d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qgjwj" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.189969 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9shvr\" (UniqueName: \"kubernetes.io/projected/e5f2f2e4-5c34-410d-8f03-1b34ac392176-kube-api-access-9shvr\") pod \"openshift-config-operator-7777fb866f-q8tkk\" (UID: \"e5f2f2e4-5c34-410d-8f03-1b34ac392176\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-q8tkk" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190011 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9xls\" (UniqueName: \"kubernetes.io/projected/be857bca-e5ac-4d2f-8f62-8ab639f08d51-kube-api-access-n9xls\") pod \"controller-manager-879f6c89f-c2xpx\" (UID: \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190058 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/be857bca-e5ac-4d2f-8f62-8ab639f08d51-serving-cert\") pod \"controller-manager-879f6c89f-c2xpx\" (UID: \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190090 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f2f2e4-5c34-410d-8f03-1b34ac392176-serving-cert\") pod \"openshift-config-operator-7777fb866f-q8tkk\" (UID: \"e5f2f2e4-5c34-410d-8f03-1b34ac392176\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-q8tkk" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190125 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg6ps\" (UniqueName: \"kubernetes.io/projected/827fd14f-134f-46a7-8592-ad91bbb05510-kube-api-access-gg6ps\") pod \"machine-config-operator-74547568cd-hsxbt\" (UID: \"827fd14f-134f-46a7-8592-ad91bbb05510\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190153 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c158e403-ca5b-4c91-99b7-b530be24cd62-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tq8bc\" (UID: \"c158e403-ca5b-4c91-99b7-b530be24cd62\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tq8bc" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190181 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hjjzp\" 
(UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190209 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190274 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190289 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190303 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be857bca-e5ac-4d2f-8f62-8ab639f08d51-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-c2xpx\" (UID: \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190337 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-serving-cert\") 
pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190368 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e5f2f2e4-5c34-410d-8f03-1b34ac392176-available-featuregates\") pod \"openshift-config-operator-7777fb866f-q8tkk\" (UID: \"e5f2f2e4-5c34-410d-8f03-1b34ac392176\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-q8tkk" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190400 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-encryption-config\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190437 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190470 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc 
kubenswrapper[4814]: I1202 14:31:20.190521 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8720fabc-937c-4b06-a3de-bb2719b71d1e-metrics-tls\") pod \"dns-operator-744455d44c-fxs2g\" (UID: \"8720fabc-937c-4b06-a3de-bb2719b71d1e\") " pod="openshift-dns-operator/dns-operator-744455d44c-fxs2g" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190552 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/79a75184-8ba4-400f-9113-d88426227339-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-t7nc5\" (UID: \"79a75184-8ba4-400f-9113-d88426227339\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7nc5" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190586 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/588890a2-6538-4f1e-b102-24efd0c91bb8-config\") pod \"route-controller-manager-6576b87f9c-t82r6\" (UID: \"588890a2-6538-4f1e-b102-24efd0c91bb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190617 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/588890a2-6538-4f1e-b102-24efd0c91bb8-client-ca\") pod \"route-controller-manager-6576b87f9c-t82r6\" (UID: \"588890a2-6538-4f1e-b102-24efd0c91bb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190648 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-node-pullsecrets\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190683 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190717 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be857bca-e5ac-4d2f-8f62-8ab639f08d51-config\") pod \"controller-manager-879f6c89f-c2xpx\" (UID: \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190799 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-2kv5r"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190859 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/827fd14f-134f-46a7-8592-ad91bbb05510-images\") pod \"machine-config-operator-74547568cd-hsxbt\" (UID: \"827fd14f-134f-46a7-8592-ad91bbb05510\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190897 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79a75184-8ba4-400f-9113-d88426227339-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-t7nc5\" (UID: \"79a75184-8ba4-400f-9113-d88426227339\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7nc5" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190930 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-serving-cert\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190962 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz9tt\" (UniqueName: \"kubernetes.io/projected/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-kube-api-access-pz9tt\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.190997 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191026 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtjb7\" (UniqueName: \"kubernetes.io/projected/c158e403-ca5b-4c91-99b7-b530be24cd62-kube-api-access-rtjb7\") pod \"machine-config-controller-84d6567774-tq8bc\" (UID: \"c158e403-ca5b-4c91-99b7-b530be24cd62\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tq8bc" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 
14:31:20.191051 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-audit\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191116 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-etcd-serving-ca\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191138 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/827fd14f-134f-46a7-8592-ad91bbb05510-proxy-tls\") pod \"machine-config-operator-74547568cd-hsxbt\" (UID: \"827fd14f-134f-46a7-8592-ad91bbb05510\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191163 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qnc7\" (UniqueName: \"kubernetes.io/projected/79a75184-8ba4-400f-9113-d88426227339-kube-api-access-7qnc7\") pod \"cluster-image-registry-operator-dc59b4c8b-t7nc5\" (UID: \"79a75184-8ba4-400f-9113-d88426227339\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7nc5" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191185 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/588890a2-6538-4f1e-b102-24efd0c91bb8-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-t82r6\" (UID: \"588890a2-6538-4f1e-b102-24efd0c91bb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191207 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191269 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hws8k\" (UniqueName: \"kubernetes.io/projected/8725a2f3-4453-49c6-a55e-2ec80acf982d-kube-api-access-hws8k\") pod \"kube-storage-version-migrator-operator-b67b599dd-qgjwj\" (UID: \"8725a2f3-4453-49c6-a55e-2ec80acf982d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qgjwj" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191324 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c158e403-ca5b-4c91-99b7-b530be24cd62-proxy-tls\") pod \"machine-config-controller-84d6567774-tq8bc\" (UID: \"c158e403-ca5b-4c91-99b7-b530be24cd62\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tq8bc" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191373 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79a75184-8ba4-400f-9113-d88426227339-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-t7nc5\" (UID: \"79a75184-8ba4-400f-9113-d88426227339\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7nc5" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191412 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9wnm\" (UniqueName: \"kubernetes.io/projected/0bf522e7-cf36-4600-812d-8f1c82d9ac16-kube-api-access-m9wnm\") pod \"machine-approver-56656f9798-bmxm4\" (UID: \"0bf522e7-cf36-4600-812d-8f1c82d9ac16\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmxm4" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191442 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be857bca-e5ac-4d2f-8f62-8ab639f08d51-client-ca\") pod \"controller-manager-879f6c89f-c2xpx\" (UID: \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191543 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nlcv\" (UniqueName: \"kubernetes.io/projected/c8737e3b-a2b1-4b23-90bc-5dc758823ae9-kube-api-access-6nlcv\") pod \"downloads-7954f5f757-tn4k9\" (UID: \"c8737e3b-a2b1-4b23-90bc-5dc758823ae9\") " pod="openshift-console/downloads-7954f5f757-tn4k9" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191585 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-image-import-ca\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191639 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/827fd14f-134f-46a7-8592-ad91bbb05510-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hsxbt\" (UID: \"827fd14f-134f-46a7-8592-ad91bbb05510\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191678 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0bf522e7-cf36-4600-812d-8f1c82d9ac16-machine-approver-tls\") pod \"machine-approver-56656f9798-bmxm4\" (UID: \"0bf522e7-cf36-4600-812d-8f1c82d9ac16\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmxm4" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191707 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0bf522e7-cf36-4600-812d-8f1c82d9ac16-auth-proxy-config\") pod \"machine-approver-56656f9798-bmxm4\" (UID: \"0bf522e7-cf36-4600-812d-8f1c82d9ac16\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmxm4" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191728 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-config\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191748 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-etcd-client\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 
crc kubenswrapper[4814]: I1202 14:31:20.191776 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7ft4\" (UniqueName: \"kubernetes.io/projected/8720fabc-937c-4b06-a3de-bb2719b71d1e-kube-api-access-k7ft4\") pod \"dns-operator-744455d44c-fxs2g\" (UID: \"8720fabc-937c-4b06-a3de-bb2719b71d1e\") " pod="openshift-dns-operator/dns-operator-744455d44c-fxs2g" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191798 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5jzq\" (UniqueName: \"kubernetes.io/projected/588890a2-6538-4f1e-b102-24efd0c91bb8-kube-api-access-j5jzq\") pod \"route-controller-manager-6576b87f9c-t82r6\" (UID: \"588890a2-6538-4f1e-b102-24efd0c91bb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191823 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-audit-policies\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191873 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8725a2f3-4453-49c6-a55e-2ec80acf982d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qgjwj\" (UID: \"8725a2f3-4453-49c6-a55e-2ec80acf982d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qgjwj" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191916 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191946 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191969 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191992 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mflw\" (UniqueName: \"kubernetes.io/projected/050c15f4-8e90-4961-bb14-eaa0aa868b8c-kube-api-access-8mflw\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.191950 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b4rng"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.192043 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-2kv5r" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.192168 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-audit-dir\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.192196 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/050c15f4-8e90-4961-bb14-eaa0aa868b8c-audit-dir\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.193268 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.193732 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.193279 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-b4rng" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.194945 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.195269 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.195473 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.195521 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.195557 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.199608 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.199790 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.199967 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.200093 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.200149 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.200198 4814 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.200350 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.200453 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.200600 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.201150 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bd5zz"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.202303 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.232305 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bd5zz" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.234931 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-98tt4"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.236363 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.236938 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.237366 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.238258 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.238680 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.240846 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.241354 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.242394 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.242642 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-98tt4" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.251457 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ch9nz"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.254788 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.255275 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.255644 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.261537 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.262275 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ch9nz" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.262592 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.262860 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.263007 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.263257 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.263514 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.263525 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.263831 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.263939 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.264020 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h2zzv"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.264314 4814 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"kube-root-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.264398 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.263857 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.264322 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.264559 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.264597 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h2zzv" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.264648 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.264736 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.264835 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.264974 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.264979 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 02 14:31:20 
crc kubenswrapper[4814]: I1202 14:31:20.265061 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.265200 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.265320 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r9tww"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.269061 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xtgt5"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.269368 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r9tww" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.265424 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.265464 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.270338 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.270454 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xtgt5" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.265498 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.270888 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.265541 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.265622 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.265653 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.265765 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.265801 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.265840 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.271186 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-794sp"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.271532 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.271770 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.272623 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.275190 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.276805 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.277051 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7vvdn"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.277919 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7vvdn" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.278117 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.278361 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-qbj7r"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.304707 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8mxvp"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.306092 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.307018 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zqrmh"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.307651 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hvzjs"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.307947 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8mxvp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.308129 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.308285 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zqrmh" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.308292 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-qbj7r" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.308912 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vfvr6"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.309127 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.310121 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bf522e7-cf36-4600-812d-8f1c82d9ac16-config\") pod \"machine-approver-56656f9798-bmxm4\" (UID: \"0bf522e7-cf36-4600-812d-8f1c82d9ac16\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmxm4" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.311255 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bf522e7-cf36-4600-812d-8f1c82d9ac16-config\") pod \"machine-approver-56656f9798-bmxm4\" (UID: \"0bf522e7-cf36-4600-812d-8f1c82d9ac16\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmxm4" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.313344 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b5c70a5-db94-4e6e-b18d-a73ab6274154-trusted-ca\") pod \"ingress-operator-5b745b69d9-jxnsc\" (UID: \"1b5c70a5-db94-4e6e-b18d-a73ab6274154\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.313527 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8725a2f3-4453-49c6-a55e-2ec80acf982d-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-qgjwj\" (UID: \"8725a2f3-4453-49c6-a55e-2ec80acf982d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qgjwj" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.313647 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9shvr\" (UniqueName: \"kubernetes.io/projected/e5f2f2e4-5c34-410d-8f03-1b34ac392176-kube-api-access-9shvr\") pod \"openshift-config-operator-7777fb866f-q8tkk\" (UID: \"e5f2f2e4-5c34-410d-8f03-1b34ac392176\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-q8tkk" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.313755 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9xls\" (UniqueName: \"kubernetes.io/projected/be857bca-e5ac-4d2f-8f62-8ab639f08d51-kube-api-access-n9xls\") pod \"controller-manager-879f6c89f-c2xpx\" (UID: \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.313863 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-service-ca\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.313957 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ab31756-fb86-44da-b02b-54c45f173962-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r9tww\" (UID: \"4ab31756-fb86-44da-b02b-54c45f173962\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r9tww" Dec 02 14:31:20 crc 
kubenswrapper[4814]: I1202 14:31:20.314043 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f2f2e4-5c34-410d-8f03-1b34ac392176-serving-cert\") pod \"openshift-config-operator-7777fb866f-q8tkk\" (UID: \"e5f2f2e4-5c34-410d-8f03-1b34ac392176\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-q8tkk" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.314132 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be857bca-e5ac-4d2f-8f62-8ab639f08d51-serving-cert\") pod \"controller-manager-879f6c89f-c2xpx\" (UID: \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.314214 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmv27\" (UniqueName: \"kubernetes.io/projected/153235f2-4506-45d4-801f-8c96d28df313-kube-api-access-gmv27\") pod \"authentication-operator-69f744f599-2kv5r\" (UID: \"153235f2-4506-45d4-801f-8c96d28df313\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2kv5r" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.314333 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d1134e0d-63cf-4b7f-8f90-1d340c68d818-audit-policies\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.314425 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a325a62-edee-4944-8cd5-5e74d2e8d822-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-bd5zz\" (UID: \"0a325a62-edee-4944-8cd5-5e74d2e8d822\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bd5zz" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.314513 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg6ps\" (UniqueName: \"kubernetes.io/projected/827fd14f-134f-46a7-8592-ad91bbb05510-kube-api-access-gg6ps\") pod \"machine-config-operator-74547568cd-hsxbt\" (UID: \"827fd14f-134f-46a7-8592-ad91bbb05510\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.314598 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c158e403-ca5b-4c91-99b7-b530be24cd62-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tq8bc\" (UID: \"c158e403-ca5b-4c91-99b7-b530be24cd62\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tq8bc" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.314688 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.314358 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vfvr6" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.314279 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nblt7"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.318598 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pfbbr"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.318777 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c158e403-ca5b-4c91-99b7-b530be24cd62-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tq8bc\" (UID: \"c158e403-ca5b-4c91-99b7-b530be24cd62\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tq8bc" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.320115 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-r7wht"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.321993 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-868nw"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.322912 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.323891 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.314768 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324311 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324344 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f47jk\" (UniqueName: \"kubernetes.io/projected/d1134e0d-63cf-4b7f-8f90-1d340c68d818-kube-api-access-f47jk\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324374 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be857bca-e5ac-4d2f-8f62-8ab639f08d51-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-c2xpx\" (UID: \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324401 4814 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324423 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e5f2f2e4-5c34-410d-8f03-1b34ac392176-available-featuregates\") pod \"openshift-config-operator-7777fb866f-q8tkk\" (UID: \"e5f2f2e4-5c34-410d-8f03-1b34ac392176\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-q8tkk" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324448 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-encryption-config\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324484 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324507 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324527 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664ee5fe-5761-497f-80d4-8b9a7059d09a-config\") pod \"console-operator-58897d9998-b4rng\" (UID: \"664ee5fe-5761-497f-80d4-8b9a7059d09a\") " pod="openshift-console-operator/console-operator-58897d9998-b4rng" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324550 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-trusted-ca-bundle\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324572 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/79a75184-8ba4-400f-9113-d88426227339-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-t7nc5\" (UID: \"79a75184-8ba4-400f-9113-d88426227339\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7nc5" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324593 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d1134e0d-63cf-4b7f-8f90-1d340c68d818-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324614 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dqbn\" (UniqueName: 
\"kubernetes.io/projected/8584026b-8df2-4dca-9a5b-ee9a1fe36808-kube-api-access-8dqbn\") pod \"openshift-controller-manager-operator-756b6f6bc6-ch9nz\" (UID: \"8584026b-8df2-4dca-9a5b-ee9a1fe36808\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ch9nz" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324638 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fe138880-67a4-4c5a-ba3c-98af7246dcb5-profile-collector-cert\") pod \"catalog-operator-68c6474976-7vvdn\" (UID: \"fe138880-67a4-4c5a-ba3c-98af7246dcb5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7vvdn" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324674 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8720fabc-937c-4b06-a3de-bb2719b71d1e-metrics-tls\") pod \"dns-operator-744455d44c-fxs2g\" (UID: \"8720fabc-937c-4b06-a3de-bb2719b71d1e\") " pod="openshift-dns-operator/dns-operator-744455d44c-fxs2g" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324703 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/588890a2-6538-4f1e-b102-24efd0c91bb8-config\") pod \"route-controller-manager-6576b87f9c-t82r6\" (UID: \"588890a2-6538-4f1e-b102-24efd0c91bb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324743 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1134e0d-63cf-4b7f-8f90-1d340c68d818-serving-cert\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 
14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324765 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrb9j\" (UniqueName: \"kubernetes.io/projected/dd8c1623-2ec1-48f8-a92d-a6d160ee78d7-kube-api-access-wrb9j\") pod \"cluster-samples-operator-665b6dd947-xtgt5\" (UID: \"dd8c1623-2ec1-48f8-a92d-a6d160ee78d7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xtgt5" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324784 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a325a62-edee-4944-8cd5-5e74d2e8d822-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bd5zz\" (UID: \"0a325a62-edee-4944-8cd5-5e74d2e8d822\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bd5zz" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324801 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/584ddac6-ca1c-4d00-ae3d-6b6e9b0a32df-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-98tt4\" (UID: \"584ddac6-ca1c-4d00-ae3d-6b6e9b0a32df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-98tt4" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324823 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/588890a2-6538-4f1e-b102-24efd0c91bb8-client-ca\") pod \"route-controller-manager-6576b87f9c-t82r6\" (UID: \"588890a2-6538-4f1e-b102-24efd0c91bb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324843 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" 
(UniqueName: \"kubernetes.io/host-path/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-node-pullsecrets\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324866 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324895 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be857bca-e5ac-4d2f-8f62-8ab639f08d51-config\") pod \"controller-manager-879f6c89f-c2xpx\" (UID: \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324916 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/153235f2-4506-45d4-801f-8c96d28df313-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-2kv5r\" (UID: \"153235f2-4506-45d4-801f-8c96d28df313\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2kv5r" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.324953 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ab31756-fb86-44da-b02b-54c45f173962-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r9tww\" (UID: \"4ab31756-fb86-44da-b02b-54c45f173962\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r9tww" Dec 02 14:31:20 crc 
kubenswrapper[4814]: I1202 14:31:20.324982 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/153235f2-4506-45d4-801f-8c96d28df313-service-ca-bundle\") pod \"authentication-operator-69f744f599-2kv5r\" (UID: \"153235f2-4506-45d4-801f-8c96d28df313\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2kv5r" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.325003 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/827fd14f-134f-46a7-8592-ad91bbb05510-images\") pod \"machine-config-operator-74547568cd-hsxbt\" (UID: \"827fd14f-134f-46a7-8592-ad91bbb05510\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.325028 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.325046 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79a75184-8ba4-400f-9113-d88426227339-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-t7nc5\" (UID: \"79a75184-8ba4-400f-9113-d88426227339\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7nc5" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.325063 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-serving-cert\") pod 
\"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.325110 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz9tt\" (UniqueName: \"kubernetes.io/projected/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-kube-api-access-pz9tt\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.325132 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kffg\" (UniqueName: \"kubernetes.io/projected/1b5c70a5-db94-4e6e-b18d-a73ab6274154-kube-api-access-9kffg\") pod \"ingress-operator-5b745b69d9-jxnsc\" (UID: \"1b5c70a5-db94-4e6e-b18d-a73ab6274154\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.325153 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57zfs\" (UniqueName: \"kubernetes.io/projected/fe138880-67a4-4c5a-ba3c-98af7246dcb5-kube-api-access-57zfs\") pod \"catalog-operator-68c6474976-7vvdn\" (UID: \"fe138880-67a4-4c5a-ba3c-98af7246dcb5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7vvdn" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.325174 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtjb7\" (UniqueName: \"kubernetes.io/projected/c158e403-ca5b-4c91-99b7-b530be24cd62-kube-api-access-rtjb7\") pod \"machine-config-controller-84d6567774-tq8bc\" (UID: \"c158e403-ca5b-4c91-99b7-b530be24cd62\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tq8bc" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.325222 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-audit\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.325258 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-etcd-serving-ca\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.325280 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/584ddac6-ca1c-4d00-ae3d-6b6e9b0a32df-config\") pod \"kube-controller-manager-operator-78b949d7b-98tt4\" (UID: \"584ddac6-ca1c-4d00-ae3d-6b6e9b0a32df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-98tt4" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.325300 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a325a62-edee-4944-8cd5-5e74d2e8d822-config\") pod \"kube-apiserver-operator-766d6c64bb-bd5zz\" (UID: \"0a325a62-edee-4944-8cd5-5e74d2e8d822\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bd5zz" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.325317 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/827fd14f-134f-46a7-8592-ad91bbb05510-proxy-tls\") pod \"machine-config-operator-74547568cd-hsxbt\" (UID: \"827fd14f-134f-46a7-8592-ad91bbb05510\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.325348 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qnc7\" (UniqueName: \"kubernetes.io/projected/79a75184-8ba4-400f-9113-d88426227339-kube-api-access-7qnc7\") pod \"cluster-image-registry-operator-dc59b4c8b-t7nc5\" (UID: \"79a75184-8ba4-400f-9113-d88426227339\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7nc5" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.325367 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1134e0d-63cf-4b7f-8f90-1d340c68d818-audit-dir\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.325394 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/588890a2-6538-4f1e-b102-24efd0c91bb8-serving-cert\") pod \"route-controller-manager-6576b87f9c-t82r6\" (UID: \"588890a2-6538-4f1e-b102-24efd0c91bb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.325432 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.325462 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/bcdcf420-00a9-4e74-a62a-20422778b5e0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-h2zzv\" (UID: \"bcdcf420-00a9-4e74-a62a-20422778b5e0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h2zzv" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.325484 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8584026b-8df2-4dca-9a5b-ee9a1fe36808-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ch9nz\" (UID: \"8584026b-8df2-4dca-9a5b-ee9a1fe36808\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ch9nz" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.325518 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hws8k\" (UniqueName: \"kubernetes.io/projected/8725a2f3-4453-49c6-a55e-2ec80acf982d-kube-api-access-hws8k\") pod \"kube-storage-version-migrator-operator-b67b599dd-qgjwj\" (UID: \"8725a2f3-4453-49c6-a55e-2ec80acf982d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qgjwj" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.326613 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c158e403-ca5b-4c91-99b7-b530be24cd62-proxy-tls\") pod \"machine-config-controller-84d6567774-tq8bc\" (UID: \"c158e403-ca5b-4c91-99b7-b530be24cd62\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tq8bc" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.326681 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcdcf420-00a9-4e74-a62a-20422778b5e0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-h2zzv\" 
(UID: \"bcdcf420-00a9-4e74-a62a-20422778b5e0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h2zzv" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.326715 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-console-config\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.333372 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.333632 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nblt7" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.349527 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-r7wht" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.349660 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-868nw" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.353358 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.354713 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be857bca-e5ac-4d2f-8f62-8ab639f08d51-config\") pod \"controller-manager-879f6c89f-c2xpx\" (UID: \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.355770 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/588890a2-6538-4f1e-b102-24efd0c91bb8-client-ca\") pod \"route-controller-manager-6576b87f9c-t82r6\" (UID: \"588890a2-6538-4f1e-b102-24efd0c91bb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.356300 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.358073 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/e5f2f2e4-5c34-410d-8f03-1b34ac392176-available-featuregates\") pod \"openshift-config-operator-7777fb866f-q8tkk\" (UID: \"e5f2f2e4-5c34-410d-8f03-1b34ac392176\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-q8tkk" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.361139 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.362870 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79a75184-8ba4-400f-9113-d88426227339-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-t7nc5\" (UID: \"79a75184-8ba4-400f-9113-d88426227339\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7nc5" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.362899 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be857bca-e5ac-4d2f-8f62-8ab639f08d51-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-c2xpx\" (UID: \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.363980 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc 
kubenswrapper[4814]: I1202 14:31:20.364683 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-audit\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.365308 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-etcd-serving-ca\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.369308 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-node-pullsecrets\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.370358 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/827fd14f-134f-46a7-8592-ad91bbb05510-images\") pod \"machine-config-operator-74547568cd-hsxbt\" (UID: \"827fd14f-134f-46a7-8592-ad91bbb05510\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.372787 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/79a75184-8ba4-400f-9113-d88426227339-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-t7nc5\" (UID: \"79a75184-8ba4-400f-9113-d88426227339\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7nc5" 
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.373185 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/588890a2-6538-4f1e-b102-24efd0c91bb8-config\") pod \"route-controller-manager-6576b87f9c-t82r6\" (UID: \"588890a2-6538-4f1e-b102-24efd0c91bb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.373461 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-oauth-serving-cert\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.373565 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab31756-fb86-44da-b02b-54c45f173962-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r9tww\" (UID: \"4ab31756-fb86-44da-b02b-54c45f173962\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r9tww" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.373600 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79a75184-8ba4-400f-9113-d88426227339-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-t7nc5\" (UID: \"79a75184-8ba4-400f-9113-d88426227339\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7nc5" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.373622 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9wnm\" (UniqueName: 
\"kubernetes.io/projected/0bf522e7-cf36-4600-812d-8f1c82d9ac16-kube-api-access-m9wnm\") pod \"machine-approver-56656f9798-bmxm4\" (UID: \"0bf522e7-cf36-4600-812d-8f1c82d9ac16\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmxm4" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.373641 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nlcv\" (UniqueName: \"kubernetes.io/projected/c8737e3b-a2b1-4b23-90bc-5dc758823ae9-kube-api-access-6nlcv\") pod \"downloads-7954f5f757-tn4k9\" (UID: \"c8737e3b-a2b1-4b23-90bc-5dc758823ae9\") " pod="openshift-console/downloads-7954f5f757-tn4k9" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.373686 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-image-import-ca\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.373708 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be857bca-e5ac-4d2f-8f62-8ab639f08d51-client-ca\") pod \"controller-manager-879f6c89f-c2xpx\" (UID: \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.373729 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1134e0d-63cf-4b7f-8f90-1d340c68d818-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.373751 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/153235f2-4506-45d4-801f-8c96d28df313-serving-cert\") pod \"authentication-operator-69f744f599-2kv5r\" (UID: \"153235f2-4506-45d4-801f-8c96d28df313\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2kv5r" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.373786 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-console-serving-cert\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.373811 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/827fd14f-134f-46a7-8592-ad91bbb05510-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hsxbt\" (UID: \"827fd14f-134f-46a7-8592-ad91bbb05510\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.373830 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0bf522e7-cf36-4600-812d-8f1c82d9ac16-machine-approver-tls\") pod \"machine-approver-56656f9798-bmxm4\" (UID: \"0bf522e7-cf36-4600-812d-8f1c82d9ac16\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmxm4" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.373851 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzhrx\" (UniqueName: \"kubernetes.io/projected/664ee5fe-5761-497f-80d4-8b9a7059d09a-kube-api-access-bzhrx\") pod \"console-operator-58897d9998-b4rng\" (UID: 
\"664ee5fe-5761-497f-80d4-8b9a7059d09a\") " pod="openshift-console-operator/console-operator-58897d9998-b4rng" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.373869 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d1134e0d-63cf-4b7f-8f90-1d340c68d818-etcd-client\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.373902 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmfws\" (UniqueName: \"kubernetes.io/projected/bcdcf420-00a9-4e74-a62a-20422778b5e0-kube-api-access-bmfws\") pod \"openshift-apiserver-operator-796bbdcf4f-h2zzv\" (UID: \"bcdcf420-00a9-4e74-a62a-20422778b5e0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h2zzv" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.373946 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7ft4\" (UniqueName: \"kubernetes.io/projected/8720fabc-937c-4b06-a3de-bb2719b71d1e-kube-api-access-k7ft4\") pod \"dns-operator-744455d44c-fxs2g\" (UID: \"8720fabc-937c-4b06-a3de-bb2719b71d1e\") " pod="openshift-dns-operator/dns-operator-744455d44c-fxs2g" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.373969 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5jzq\" (UniqueName: \"kubernetes.io/projected/588890a2-6538-4f1e-b102-24efd0c91bb8-kube-api-access-j5jzq\") pod \"route-controller-manager-6576b87f9c-t82r6\" (UID: \"588890a2-6538-4f1e-b102-24efd0c91bb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.373991 4814 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0bf522e7-cf36-4600-812d-8f1c82d9ac16-auth-proxy-config\") pod \"machine-approver-56656f9798-bmxm4\" (UID: \"0bf522e7-cf36-4600-812d-8f1c82d9ac16\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmxm4" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374030 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-config\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374050 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-etcd-client\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374068 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/664ee5fe-5761-497f-80d4-8b9a7059d09a-serving-cert\") pod \"console-operator-58897d9998-b4rng\" (UID: \"664ee5fe-5761-497f-80d4-8b9a7059d09a\") " pod="openshift-console-operator/console-operator-58897d9998-b4rng" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374094 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-console-oauth-config\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374363 4814 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d1134e0d-63cf-4b7f-8f90-1d340c68d818-encryption-config\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374381 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b5c70a5-db94-4e6e-b18d-a73ab6274154-metrics-tls\") pod \"ingress-operator-5b745b69d9-jxnsc\" (UID: \"1b5c70a5-db94-4e6e-b18d-a73ab6274154\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374405 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-audit-policies\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374435 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8725a2f3-4453-49c6-a55e-2ec80acf982d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qgjwj\" (UID: \"8725a2f3-4453-49c6-a55e-2ec80acf982d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qgjwj" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374459 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8584026b-8df2-4dca-9a5b-ee9a1fe36808-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ch9nz\" (UID: 
\"8584026b-8df2-4dca-9a5b-ee9a1fe36808\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ch9nz" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374479 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd8c1623-2ec1-48f8-a92d-a6d160ee78d7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xtgt5\" (UID: \"dd8c1623-2ec1-48f8-a92d-a6d160ee78d7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xtgt5" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374519 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374543 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b5c70a5-db94-4e6e-b18d-a73ab6274154-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jxnsc\" (UID: \"1b5c70a5-db94-4e6e-b18d-a73ab6274154\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374563 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 
14:31:20.374596 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/664ee5fe-5761-497f-80d4-8b9a7059d09a-trusted-ca\") pod \"console-operator-58897d9998-b4rng\" (UID: \"664ee5fe-5761-497f-80d4-8b9a7059d09a\") " pod="openshift-console-operator/console-operator-58897d9998-b4rng" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374616 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374637 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mflw\" (UniqueName: \"kubernetes.io/projected/050c15f4-8e90-4961-bb14-eaa0aa868b8c-kube-api-access-8mflw\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374661 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-audit-dir\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374681 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153235f2-4506-45d4-801f-8c96d28df313-config\") pod \"authentication-operator-69f744f599-2kv5r\" (UID: \"153235f2-4506-45d4-801f-8c96d28df313\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-2kv5r" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374723 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/584ddac6-ca1c-4d00-ae3d-6b6e9b0a32df-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-98tt4\" (UID: \"584ddac6-ca1c-4d00-ae3d-6b6e9b0a32df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-98tt4" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374748 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fe138880-67a4-4c5a-ba3c-98af7246dcb5-srv-cert\") pod \"catalog-operator-68c6474976-7vvdn\" (UID: \"fe138880-67a4-4c5a-ba3c-98af7246dcb5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7vvdn" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374771 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/050c15f4-8e90-4961-bb14-eaa0aa868b8c-audit-dir\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374770 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.374789 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-h6l6j\" (UniqueName: \"kubernetes.io/projected/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-kube-api-access-h6l6j\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.375481 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/827fd14f-134f-46a7-8592-ad91bbb05510-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hsxbt\" (UID: \"827fd14f-134f-46a7-8592-ad91bbb05510\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.375797 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-image-import-ca\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.376067 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8725a2f3-4453-49c6-a55e-2ec80acf982d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qgjwj\" (UID: \"8725a2f3-4453-49c6-a55e-2ec80acf982d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qgjwj" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.376374 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be857bca-e5ac-4d2f-8f62-8ab639f08d51-client-ca\") pod \"controller-manager-879f6c89f-c2xpx\" (UID: \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 
14:31:20.376772 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0bf522e7-cf36-4600-812d-8f1c82d9ac16-auth-proxy-config\") pod \"machine-approver-56656f9798-bmxm4\" (UID: \"0bf522e7-cf36-4600-812d-8f1c82d9ac16\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmxm4" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.377195 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-config\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.377972 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.378586 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-audit-policies\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.379096 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.379164 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-audit-dir\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.379217 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/050c15f4-8e90-4961-bb14-eaa0aa868b8c-audit-dir\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.382172 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.383778 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fnp2w"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.384785 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.385569 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.385838 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.385931 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.385976 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.386124 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.386373 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.386560 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.386634 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.386653 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.386991 4814 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.387055 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.387091 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.387832 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8725a2f3-4453-49c6-a55e-2ec80acf982d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qgjwj\" (UID: \"8725a2f3-4453-49c6-a55e-2ec80acf982d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qgjwj" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.388493 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f2f2e4-5c34-410d-8f03-1b34ac392176-serving-cert\") pod \"openshift-config-operator-7777fb866f-q8tkk\" (UID: \"e5f2f2e4-5c34-410d-8f03-1b34ac392176\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-q8tkk" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.389650 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.389950 4814 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.390056 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be857bca-e5ac-4d2f-8f62-8ab639f08d51-serving-cert\") pod \"controller-manager-879f6c89f-c2xpx\" (UID: \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.390073 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.390184 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.390956 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-encryption-config\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.391091 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.391313 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.391460 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.391592 4814 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.391718 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.392758 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.393026 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.394381 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.394823 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.394818 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-serving-cert\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.394989 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-etcd-client\") pod \"apiserver-76f77b778f-5vc2w\" (UID: 
\"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.395597 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c158e403-ca5b-4c91-99b7-b530be24cd62-proxy-tls\") pod \"machine-config-controller-84d6567774-tq8bc\" (UID: \"c158e403-ca5b-4c91-99b7-b530be24cd62\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tq8bc" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.396065 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0bf522e7-cf36-4600-812d-8f1c82d9ac16-machine-approver-tls\") pod \"machine-approver-56656f9798-bmxm4\" (UID: \"0bf522e7-cf36-4600-812d-8f1c82d9ac16\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmxm4" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.396179 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8720fabc-937c-4b06-a3de-bb2719b71d1e-metrics-tls\") pod \"dns-operator-744455d44c-fxs2g\" (UID: \"8720fabc-937c-4b06-a3de-bb2719b71d1e\") " pod="openshift-dns-operator/dns-operator-744455d44c-fxs2g" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.397091 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-k65hc"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.398260 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c2xpx"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.398289 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbcdh"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.398496 4814 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/827fd14f-134f-46a7-8592-ad91bbb05510-proxy-tls\") pod \"machine-config-operator-74547568cd-hsxbt\" (UID: \"827fd14f-134f-46a7-8592-ad91bbb05510\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.398784 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbcdh" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.398784 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-k65hc" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.400919 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/588890a2-6538-4f1e-b102-24efd0c91bb8-serving-cert\") pod \"route-controller-manager-6576b87f9c-t82r6\" (UID: \"588890a2-6538-4f1e-b102-24efd0c91bb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.401004 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tq8bc"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.401042 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qgjwj"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.404114 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tn4k9"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.406972 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.407407 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.407456 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.411521 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.412828 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-98tt4"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.413868 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fxs2g"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.415033 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ch9nz"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.419330 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bd5zz"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.422636 4814 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7nc5"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.425046 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-2kv5r"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.425437 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.427265 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r9tww"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.429150 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-5gr2f"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.430057 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5gr2f" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.431699 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-9fx8s"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.433141 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9fx8s" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.438472 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.446634 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.447465 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xtgt5"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.449626 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h2zzv"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.452368 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.455345 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b4rng"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.458665 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-k65hc"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.461067 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-794sp"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.464960 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.465611 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 02 
14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.467046 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hvzjs"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.468599 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.471874 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7vvdn"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.473579 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hjjzp"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.474995 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbcdh"] Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.475732 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f4fcbcc9-c1bf-4735-ad50-e999ba4c9159-signing-cabundle\") pod \"service-ca-9c57cc56f-vfvr6\" (UID: \"f4fcbcc9-c1bf-4735-ad50-e999ba4c9159\") " pod="openshift-service-ca/service-ca-9c57cc56f-vfvr6" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.475783 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd8c1623-2ec1-48f8-a92d-a6d160ee78d7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xtgt5\" (UID: \"dd8c1623-2ec1-48f8-a92d-a6d160ee78d7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xtgt5" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.475816 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8584026b-8df2-4dca-9a5b-ee9a1fe36808-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ch9nz\" (UID: \"8584026b-8df2-4dca-9a5b-ee9a1fe36808\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ch9nz" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.475844 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/584ddac6-ca1c-4d00-ae3d-6b6e9b0a32df-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-98tt4\" (UID: \"584ddac6-ca1c-4d00-ae3d-6b6e9b0a32df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-98tt4" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.475870 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a19e92eb-a898-4589-9cff-652e92e2520e-apiservice-cert\") pod \"packageserver-d55dfcdfc-54zqs\" (UID: \"a19e92eb-a898-4589-9cff-652e92e2520e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.475900 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5hrc\" (UniqueName: \"kubernetes.io/projected/19d86ed9-533c-4a11-a33b-74257f105a6e-kube-api-access-k5hrc\") pod \"collect-profiles-29411430-ckmjm\" (UID: \"19d86ed9-533c-4a11-a33b-74257f105a6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.475925 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6nnc\" (UniqueName: \"kubernetes.io/projected/d71727d2-331b-41f3-889e-71500d2003ae-kube-api-access-s6nnc\") pod \"etcd-operator-b45778765-pfbbr\" (UID: 
\"d71727d2-331b-41f3-889e-71500d2003ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.475951 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b5c70a5-db94-4e6e-b18d-a73ab6274154-trusted-ca\") pod \"ingress-operator-5b745b69d9-jxnsc\" (UID: \"1b5c70a5-db94-4e6e-b18d-a73ab6274154\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.475971 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d71727d2-331b-41f3-889e-71500d2003ae-etcd-ca\") pod \"etcd-operator-b45778765-pfbbr\" (UID: \"d71727d2-331b-41f3-889e-71500d2003ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.475999 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-service-ca\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476017 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q597p\" (UniqueName: \"kubernetes.io/projected/a784542d-74ab-4b13-90cb-9159a9dbc173-kube-api-access-q597p\") pod \"migrator-59844c95c7-8mxvp\" (UID: \"a784542d-74ab-4b13-90cb-9159a9dbc173\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8mxvp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476036 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmv27\" (UniqueName: 
\"kubernetes.io/projected/153235f2-4506-45d4-801f-8c96d28df313-kube-api-access-gmv27\") pod \"authentication-operator-69f744f599-2kv5r\" (UID: \"153235f2-4506-45d4-801f-8c96d28df313\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2kv5r" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476081 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9f6f57f3-b8c7-4c71-b7bc-627e6ba30686-default-certificate\") pod \"router-default-5444994796-qbj7r\" (UID: \"9f6f57f3-b8c7-4c71-b7bc-627e6ba30686\") " pod="openshift-ingress/router-default-5444994796-qbj7r" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476111 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d1134e0d-63cf-4b7f-8f90-1d340c68d818-audit-policies\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476127 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a325a62-edee-4944-8cd5-5e74d2e8d822-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bd5zz\" (UID: \"0a325a62-edee-4944-8cd5-5e74d2e8d822\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bd5zz" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476145 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6316d49-679c-4393-bae7-5ca27876cce4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fnp2w\" (UID: \"a6316d49-679c-4393-bae7-5ca27876cce4\") " pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" Dec 02 14:31:20 crc 
kubenswrapper[4814]: I1202 14:31:20.476262 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19d86ed9-533c-4a11-a33b-74257f105a6e-secret-volume\") pod \"collect-profiles-29411430-ckmjm\" (UID: \"19d86ed9-533c-4a11-a33b-74257f105a6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476286 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f47jk\" (UniqueName: \"kubernetes.io/projected/d1134e0d-63cf-4b7f-8f90-1d340c68d818-kube-api-access-f47jk\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476308 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38511f71-ece2-4870-a0c7-22aa90dc0eb6-serving-cert\") pod \"service-ca-operator-777779d784-868nw\" (UID: \"38511f71-ece2-4870-a0c7-22aa90dc0eb6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-868nw" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476359 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d1134e0d-63cf-4b7f-8f90-1d340c68d818-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476396 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fe138880-67a4-4c5a-ba3c-98af7246dcb5-profile-collector-cert\") pod \"catalog-operator-68c6474976-7vvdn\" (UID: 
\"fe138880-67a4-4c5a-ba3c-98af7246dcb5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7vvdn" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476421 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1134e0d-63cf-4b7f-8f90-1d340c68d818-serving-cert\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476436 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrb9j\" (UniqueName: \"kubernetes.io/projected/dd8c1623-2ec1-48f8-a92d-a6d160ee78d7-kube-api-access-wrb9j\") pod \"cluster-samples-operator-665b6dd947-xtgt5\" (UID: \"dd8c1623-2ec1-48f8-a92d-a6d160ee78d7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xtgt5" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476452 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a325a62-edee-4944-8cd5-5e74d2e8d822-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bd5zz\" (UID: \"0a325a62-edee-4944-8cd5-5e74d2e8d822\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bd5zz" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476502 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/584ddac6-ca1c-4d00-ae3d-6b6e9b0a32df-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-98tt4\" (UID: \"584ddac6-ca1c-4d00-ae3d-6b6e9b0a32df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-98tt4" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476520 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/153235f2-4506-45d4-801f-8c96d28df313-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-2kv5r\" (UID: \"153235f2-4506-45d4-801f-8c96d28df313\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2kv5r" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476540 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a19e92eb-a898-4589-9cff-652e92e2520e-tmpfs\") pod \"packageserver-d55dfcdfc-54zqs\" (UID: \"a19e92eb-a898-4589-9cff-652e92e2520e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476569 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a19e92eb-a898-4589-9cff-652e92e2520e-webhook-cert\") pod \"packageserver-d55dfcdfc-54zqs\" (UID: \"a19e92eb-a898-4589-9cff-652e92e2520e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476598 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a3f8d529-59a4-4ae7-a24f-f6b9d1881a70-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-r7wht\" (UID: \"a3f8d529-59a4-4ae7-a24f-f6b9d1881a70\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r7wht" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476640 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kffg\" (UniqueName: \"kubernetes.io/projected/1b5c70a5-db94-4e6e-b18d-a73ab6274154-kube-api-access-9kffg\") pod \"ingress-operator-5b745b69d9-jxnsc\" (UID: \"1b5c70a5-db94-4e6e-b18d-a73ab6274154\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476661 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57zfs\" (UniqueName: \"kubernetes.io/projected/fe138880-67a4-4c5a-ba3c-98af7246dcb5-kube-api-access-57zfs\") pod \"catalog-operator-68c6474976-7vvdn\" (UID: \"fe138880-67a4-4c5a-ba3c-98af7246dcb5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7vvdn"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476769 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a325a62-edee-4944-8cd5-5e74d2e8d822-config\") pod \"kube-apiserver-operator-766d6c64bb-bd5zz\" (UID: \"0a325a62-edee-4944-8cd5-5e74d2e8d822\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bd5zz"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476811 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1134e0d-63cf-4b7f-8f90-1d340c68d818-audit-dir\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476834 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njxtm\" (UniqueName: \"kubernetes.io/projected/9f6f57f3-b8c7-4c71-b7bc-627e6ba30686-kube-api-access-njxtm\") pod \"router-default-5444994796-qbj7r\" (UID: \"9f6f57f3-b8c7-4c71-b7bc-627e6ba30686\") " pod="openshift-ingress/router-default-5444994796-qbj7r"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476851 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcdcf420-00a9-4e74-a62a-20422778b5e0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-h2zzv\" (UID: \"bcdcf420-00a9-4e74-a62a-20422778b5e0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h2zzv"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476873 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8584026b-8df2-4dca-9a5b-ee9a1fe36808-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ch9nz\" (UID: \"8584026b-8df2-4dca-9a5b-ee9a1fe36808\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ch9nz"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476894 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcdcf420-00a9-4e74-a62a-20422778b5e0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-h2zzv\" (UID: \"bcdcf420-00a9-4e74-a62a-20422778b5e0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h2zzv"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476909 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-console-config\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476935 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab31756-fb86-44da-b02b-54c45f173962-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r9tww\" (UID: \"4ab31756-fb86-44da-b02b-54c45f173962\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r9tww"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.476955 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-console-serving-cert\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.477016 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d1134e0d-63cf-4b7f-8f90-1d340c68d818-etcd-client\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.477100 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/664ee5fe-5761-497f-80d4-8b9a7059d09a-serving-cert\") pod \"console-operator-58897d9998-b4rng\" (UID: \"664ee5fe-5761-497f-80d4-8b9a7059d09a\") " pod="openshift-console-operator/console-operator-58897d9998-b4rng"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.477112 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8584026b-8df2-4dca-9a5b-ee9a1fe36808-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ch9nz\" (UID: \"8584026b-8df2-4dca-9a5b-ee9a1fe36808\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ch9nz"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.477122 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-console-oauth-config\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.477155 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f4fcbcc9-c1bf-4735-ad50-e999ba4c9159-signing-key\") pod \"service-ca-9c57cc56f-vfvr6\" (UID: \"f4fcbcc9-c1bf-4735-ad50-e999ba4c9159\") " pod="openshift-service-ca/service-ca-9c57cc56f-vfvr6"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.477181 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19d86ed9-533c-4a11-a33b-74257f105a6e-config-volume\") pod \"collect-profiles-29411430-ckmjm\" (UID: \"19d86ed9-533c-4a11-a33b-74257f105a6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.477290 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d1134e0d-63cf-4b7f-8f90-1d340c68d818-encryption-config\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.477360 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg9vk\" (UniqueName: \"kubernetes.io/projected/f4fcbcc9-c1bf-4735-ad50-e999ba4c9159-kube-api-access-zg9vk\") pod \"service-ca-9c57cc56f-vfvr6\" (UID: \"f4fcbcc9-c1bf-4735-ad50-e999ba4c9159\") " pod="openshift-service-ca/service-ca-9c57cc56f-vfvr6"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.477412 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f6f57f3-b8c7-4c71-b7bc-627e6ba30686-service-ca-bundle\") pod \"router-default-5444994796-qbj7r\" (UID: \"9f6f57f3-b8c7-4c71-b7bc-627e6ba30686\") " pod="openshift-ingress/router-default-5444994796-qbj7r"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.477437 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b5c70a5-db94-4e6e-b18d-a73ab6274154-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jxnsc\" (UID: \"1b5c70a5-db94-4e6e-b18d-a73ab6274154\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.477474 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt6rw\" (UniqueName: \"kubernetes.io/projected/2816ec52-510f-49c7-a70e-4e7db428d500-kube-api-access-gt6rw\") pod \"package-server-manager-789f6589d5-nblt7\" (UID: \"2816ec52-510f-49c7-a70e-4e7db428d500\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nblt7"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.477540 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/664ee5fe-5761-497f-80d4-8b9a7059d09a-trusted-ca\") pod \"console-operator-58897d9998-b4rng\" (UID: \"664ee5fe-5761-497f-80d4-8b9a7059d09a\") " pod="openshift-console-operator/console-operator-58897d9998-b4rng"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.477575 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fe138880-67a4-4c5a-ba3c-98af7246dcb5-srv-cert\") pod \"catalog-operator-68c6474976-7vvdn\" (UID: \"fe138880-67a4-4c5a-ba3c-98af7246dcb5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7vvdn"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.477632 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqbsj\" (UniqueName: \"kubernetes.io/projected/a3f8d529-59a4-4ae7-a24f-f6b9d1881a70-kube-api-access-rqbsj\") pod \"multus-admission-controller-857f4d67dd-r7wht\" (UID: \"a3f8d529-59a4-4ae7-a24f-f6b9d1881a70\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r7wht"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.477677 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d71727d2-331b-41f3-889e-71500d2003ae-serving-cert\") pod \"etcd-operator-b45778765-pfbbr\" (UID: \"d71727d2-331b-41f3-889e-71500d2003ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.477743 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153235f2-4506-45d4-801f-8c96d28df313-config\") pod \"authentication-operator-69f744f599-2kv5r\" (UID: \"153235f2-4506-45d4-801f-8c96d28df313\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2kv5r"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.477807 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6l6j\" (UniqueName: \"kubernetes.io/projected/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-kube-api-access-h6l6j\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.477865 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ab31756-fb86-44da-b02b-54c45f173962-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r9tww\" (UID: \"4ab31756-fb86-44da-b02b-54c45f173962\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r9tww"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.477909 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpttw\" (UniqueName: \"kubernetes.io/projected/38511f71-ece2-4870-a0c7-22aa90dc0eb6-kube-api-access-qpttw\") pod \"service-ca-operator-777779d784-868nw\" (UID: \"38511f71-ece2-4870-a0c7-22aa90dc0eb6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-868nw"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.477982 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d71727d2-331b-41f3-889e-71500d2003ae-etcd-client\") pod \"etcd-operator-b45778765-pfbbr\" (UID: \"d71727d2-331b-41f3-889e-71500d2003ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.478099 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5jzn\" (UniqueName: \"kubernetes.io/projected/a19e92eb-a898-4589-9cff-652e92e2520e-kube-api-access-b5jzn\") pod \"packageserver-d55dfcdfc-54zqs\" (UID: \"a19e92eb-a898-4589-9cff-652e92e2520e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.478798 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab31756-fb86-44da-b02b-54c45f173962-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r9tww\" (UID: \"4ab31756-fb86-44da-b02b-54c45f173962\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r9tww"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.478209 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a6316d49-679c-4393-bae7-5ca27876cce4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fnp2w\" (UID: \"a6316d49-679c-4393-bae7-5ca27876cce4\") " pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.479560 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664ee5fe-5761-497f-80d4-8b9a7059d09a-config\") pod \"console-operator-58897d9998-b4rng\" (UID: \"664ee5fe-5761-497f-80d4-8b9a7059d09a\") " pod="openshift-console-operator/console-operator-58897d9998-b4rng"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.479603 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-trusted-ca-bundle\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.479627 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dqbn\" (UniqueName: \"kubernetes.io/projected/8584026b-8df2-4dca-9a5b-ee9a1fe36808-kube-api-access-8dqbn\") pod \"openshift-controller-manager-operator-756b6f6bc6-ch9nz\" (UID: \"8584026b-8df2-4dca-9a5b-ee9a1fe36808\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ch9nz"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.479651 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d71727d2-331b-41f3-889e-71500d2003ae-etcd-service-ca\") pod \"etcd-operator-b45778765-pfbbr\" (UID: \"d71727d2-331b-41f3-889e-71500d2003ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.479759 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ab31756-fb86-44da-b02b-54c45f173962-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r9tww\" (UID: \"4ab31756-fb86-44da-b02b-54c45f173962\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r9tww"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.479828 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/153235f2-4506-45d4-801f-8c96d28df313-service-ca-bundle\") pod \"authentication-operator-69f744f599-2kv5r\" (UID: \"153235f2-4506-45d4-801f-8c96d28df313\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2kv5r"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.479851 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f6f57f3-b8c7-4c71-b7bc-627e6ba30686-metrics-certs\") pod \"router-default-5444994796-qbj7r\" (UID: \"9f6f57f3-b8c7-4c71-b7bc-627e6ba30686\") " pod="openshift-ingress/router-default-5444994796-qbj7r"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.479918 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/584ddac6-ca1c-4d00-ae3d-6b6e9b0a32df-config\") pod \"kube-controller-manager-operator-78b949d7b-98tt4\" (UID: \"584ddac6-ca1c-4d00-ae3d-6b6e9b0a32df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-98tt4"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.482867 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1134e0d-63cf-4b7f-8f90-1d340c68d818-audit-dir\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.483936 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664ee5fe-5761-497f-80d4-8b9a7059d09a-config\") pod \"console-operator-58897d9998-b4rng\" (UID: \"664ee5fe-5761-497f-80d4-8b9a7059d09a\") " pod="openshift-console-operator/console-operator-58897d9998-b4rng"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.484049 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcdcf420-00a9-4e74-a62a-20422778b5e0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-h2zzv\" (UID: \"bcdcf420-00a9-4e74-a62a-20422778b5e0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h2zzv"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.484802 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/584ddac6-ca1c-4d00-ae3d-6b6e9b0a32df-config\") pod \"kube-controller-manager-operator-78b949d7b-98tt4\" (UID: \"584ddac6-ca1c-4d00-ae3d-6b6e9b0a32df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-98tt4"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.487569 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd8c1623-2ec1-48f8-a92d-a6d160ee78d7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xtgt5\" (UID: \"dd8c1623-2ec1-48f8-a92d-a6d160ee78d7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xtgt5"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.487609 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ab31756-fb86-44da-b02b-54c45f173962-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r9tww\" (UID: \"4ab31756-fb86-44da-b02b-54c45f173962\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r9tww"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.488752 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/153235f2-4506-45d4-801f-8c96d28df313-service-ca-bundle\") pod \"authentication-operator-69f744f599-2kv5r\" (UID: \"153235f2-4506-45d4-801f-8c96d28df313\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2kv5r"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.488991 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d71727d2-331b-41f3-889e-71500d2003ae-config\") pod \"etcd-operator-b45778765-pfbbr\" (UID: \"d71727d2-331b-41f3-889e-71500d2003ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.489394 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a325a62-edee-4944-8cd5-5e74d2e8d822-config\") pod \"kube-apiserver-operator-766d6c64bb-bd5zz\" (UID: \"0a325a62-edee-4944-8cd5-5e74d2e8d822\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bd5zz"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.489474 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8mxvp"]
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.489501 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-oauth-serving-cert\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.489735 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq9zn\" (UniqueName: \"kubernetes.io/projected/a6316d49-679c-4393-bae7-5ca27876cce4-kube-api-access-wq9zn\") pod \"marketplace-operator-79b997595-fnp2w\" (UID: \"a6316d49-679c-4393-bae7-5ca27876cce4\") " pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.489792 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.490562 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153235f2-4506-45d4-801f-8c96d28df313-config\") pod \"authentication-operator-69f744f599-2kv5r\" (UID: \"153235f2-4506-45d4-801f-8c96d28df313\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2kv5r"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.490657 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5vc2w"]
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.491449 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b5c70a5-db94-4e6e-b18d-a73ab6274154-trusted-ca\") pod \"ingress-operator-5b745b69d9-jxnsc\" (UID: \"1b5c70a5-db94-4e6e-b18d-a73ab6274154\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.491542 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/153235f2-4506-45d4-801f-8c96d28df313-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-2kv5r\" (UID: \"153235f2-4506-45d4-801f-8c96d28df313\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2kv5r"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.492075 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2816ec52-510f-49c7-a70e-4e7db428d500-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nblt7\" (UID: \"2816ec52-510f-49c7-a70e-4e7db428d500\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nblt7"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.492127 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/153235f2-4506-45d4-801f-8c96d28df313-serving-cert\") pod \"authentication-operator-69f744f599-2kv5r\" (UID: \"153235f2-4506-45d4-801f-8c96d28df313\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2kv5r"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.492165 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9f6f57f3-b8c7-4c71-b7bc-627e6ba30686-stats-auth\") pod \"router-default-5444994796-qbj7r\" (UID: \"9f6f57f3-b8c7-4c71-b7bc-627e6ba30686\") " pod="openshift-ingress/router-default-5444994796-qbj7r"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.492277 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1134e0d-63cf-4b7f-8f90-1d340c68d818-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.492316 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a325a62-edee-4944-8cd5-5e74d2e8d822-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bd5zz\" (UID: \"0a325a62-edee-4944-8cd5-5e74d2e8d822\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bd5zz"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.492355 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmfws\" (UniqueName: \"kubernetes.io/projected/bcdcf420-00a9-4e74-a62a-20422778b5e0-kube-api-access-bmfws\") pod \"openshift-apiserver-operator-796bbdcf4f-h2zzv\" (UID: \"bcdcf420-00a9-4e74-a62a-20422778b5e0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h2zzv"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.492664 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzhrx\" (UniqueName: \"kubernetes.io/projected/664ee5fe-5761-497f-80d4-8b9a7059d09a-kube-api-access-bzhrx\") pod \"console-operator-58897d9998-b4rng\" (UID: \"664ee5fe-5761-497f-80d4-8b9a7059d09a\") " pod="openshift-console-operator/console-operator-58897d9998-b4rng"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.492898 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38511f71-ece2-4870-a0c7-22aa90dc0eb6-config\") pod \"service-ca-operator-777779d784-868nw\" (UID: \"38511f71-ece2-4870-a0c7-22aa90dc0eb6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-868nw"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.493190 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b5c70a5-db94-4e6e-b18d-a73ab6274154-metrics-tls\") pod \"ingress-operator-5b745b69d9-jxnsc\" (UID: \"1b5c70a5-db94-4e6e-b18d-a73ab6274154\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.493715 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcdcf420-00a9-4e74-a62a-20422778b5e0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-h2zzv\" (UID: \"bcdcf420-00a9-4e74-a62a-20422778b5e0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h2zzv"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.494071 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/664ee5fe-5761-497f-80d4-8b9a7059d09a-serving-cert\") pod \"console-operator-58897d9998-b4rng\" (UID: \"664ee5fe-5761-497f-80d4-8b9a7059d09a\") " pod="openshift-console-operator/console-operator-58897d9998-b4rng"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.494408 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/584ddac6-ca1c-4d00-ae3d-6b6e9b0a32df-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-98tt4\" (UID: \"584ddac6-ca1c-4d00-ae3d-6b6e9b0a32df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-98tt4"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.495166 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/664ee5fe-5761-497f-80d4-8b9a7059d09a-trusted-ca\") pod \"console-operator-58897d9998-b4rng\" (UID: \"664ee5fe-5761-497f-80d4-8b9a7059d09a\") " pod="openshift-console-operator/console-operator-58897d9998-b4rng"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.496539 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w"]
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.498978 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lx28q"]
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.499725 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8584026b-8df2-4dca-9a5b-ee9a1fe36808-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ch9nz\" (UID: \"8584026b-8df2-4dca-9a5b-ee9a1fe36808\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ch9nz"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.499868 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b5c70a5-db94-4e6e-b18d-a73ab6274154-metrics-tls\") pod \"ingress-operator-5b745b69d9-jxnsc\" (UID: \"1b5c70a5-db94-4e6e-b18d-a73ab6274154\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.501910 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/153235f2-4506-45d4-801f-8c96d28df313-serving-cert\") pod \"authentication-operator-69f744f599-2kv5r\" (UID: \"153235f2-4506-45d4-801f-8c96d28df313\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2kv5r"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.503431 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-mx8k5"]
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.503850 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-lx28q"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.506663 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vfvr6"]
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.506814 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mx8k5"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.506960 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.508755 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-868nw"]
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.510803 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-r7wht"]
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.512659 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fnp2w"]
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.514057 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pfbbr"]
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.515219 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mx8k5"]
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.516347 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zqrmh"]
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.517510 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-q8tkk"]
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.518965 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nblt7"]
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.520816 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lx28q"]
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.521680 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9fx8s"]
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.532484 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.542100 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-trusted-ca-bundle\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.545709 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.564489 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.584084 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.593461 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d1134e0d-63cf-4b7f-8f90-1d340c68d818-etcd-client\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.594493 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38511f71-ece2-4870-a0c7-22aa90dc0eb6-config\") pod \"service-ca-operator-777779d784-868nw\" (UID: \"38511f71-ece2-4870-a0c7-22aa90dc0eb6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-868nw"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.594557 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f4fcbcc9-c1bf-4735-ad50-e999ba4c9159-signing-cabundle\") pod \"service-ca-9c57cc56f-vfvr6\" (UID: \"f4fcbcc9-c1bf-4735-ad50-e999ba4c9159\") " pod="openshift-service-ca/service-ca-9c57cc56f-vfvr6"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.594590 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a19e92eb-a898-4589-9cff-652e92e2520e-apiservice-cert\") pod \"packageserver-d55dfcdfc-54zqs\" (UID: \"a19e92eb-a898-4589-9cff-652e92e2520e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.594628 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5hrc\" (UniqueName: \"kubernetes.io/projected/19d86ed9-533c-4a11-a33b-74257f105a6e-kube-api-access-k5hrc\") pod \"collect-profiles-29411430-ckmjm\" (UID: \"19d86ed9-533c-4a11-a33b-74257f105a6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.594662 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d71727d2-331b-41f3-889e-71500d2003ae-etcd-ca\") pod \"etcd-operator-b45778765-pfbbr\" (UID: \"d71727d2-331b-41f3-889e-71500d2003ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.594695 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6nnc\" (UniqueName: \"kubernetes.io/projected/d71727d2-331b-41f3-889e-71500d2003ae-kube-api-access-s6nnc\") pod \"etcd-operator-b45778765-pfbbr\" (UID: \"d71727d2-331b-41f3-889e-71500d2003ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.594781 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q597p\" (UniqueName: \"kubernetes.io/projected/a784542d-74ab-4b13-90cb-9159a9dbc173-kube-api-access-q597p\") pod \"migrator-59844c95c7-8mxvp\" (UID: \"a784542d-74ab-4b13-90cb-9159a9dbc173\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8mxvp"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.594821 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9f6f57f3-b8c7-4c71-b7bc-627e6ba30686-default-certificate\") pod \"router-default-5444994796-qbj7r\" (UID: \"9f6f57f3-b8c7-4c71-b7bc-627e6ba30686\") " pod="openshift-ingress/router-default-5444994796-qbj7r"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.594864 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6316d49-679c-4393-bae7-5ca27876cce4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fnp2w\" (UID: \"a6316d49-679c-4393-bae7-5ca27876cce4\") " pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w"
Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.594893 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName:
\"kubernetes.io/secret/19d86ed9-533c-4a11-a33b-74257f105a6e-secret-volume\") pod \"collect-profiles-29411430-ckmjm\" (UID: \"19d86ed9-533c-4a11-a33b-74257f105a6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.594948 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38511f71-ece2-4870-a0c7-22aa90dc0eb6-serving-cert\") pod \"service-ca-operator-777779d784-868nw\" (UID: \"38511f71-ece2-4870-a0c7-22aa90dc0eb6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-868nw" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.595056 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a19e92eb-a898-4589-9cff-652e92e2520e-tmpfs\") pod \"packageserver-d55dfcdfc-54zqs\" (UID: \"a19e92eb-a898-4589-9cff-652e92e2520e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.595091 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a19e92eb-a898-4589-9cff-652e92e2520e-webhook-cert\") pod \"packageserver-d55dfcdfc-54zqs\" (UID: \"a19e92eb-a898-4589-9cff-652e92e2520e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.595118 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a3f8d529-59a4-4ae7-a24f-f6b9d1881a70-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-r7wht\" (UID: \"a3f8d529-59a4-4ae7-a24f-f6b9d1881a70\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r7wht" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.595196 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-njxtm\" (UniqueName: \"kubernetes.io/projected/9f6f57f3-b8c7-4c71-b7bc-627e6ba30686-kube-api-access-njxtm\") pod \"router-default-5444994796-qbj7r\" (UID: \"9f6f57f3-b8c7-4c71-b7bc-627e6ba30686\") " pod="openshift-ingress/router-default-5444994796-qbj7r" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.595294 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f4fcbcc9-c1bf-4735-ad50-e999ba4c9159-signing-key\") pod \"service-ca-9c57cc56f-vfvr6\" (UID: \"f4fcbcc9-c1bf-4735-ad50-e999ba4c9159\") " pod="openshift-service-ca/service-ca-9c57cc56f-vfvr6" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.595320 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19d86ed9-533c-4a11-a33b-74257f105a6e-config-volume\") pod \"collect-profiles-29411430-ckmjm\" (UID: \"19d86ed9-533c-4a11-a33b-74257f105a6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.595380 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f6f57f3-b8c7-4c71-b7bc-627e6ba30686-service-ca-bundle\") pod \"router-default-5444994796-qbj7r\" (UID: \"9f6f57f3-b8c7-4c71-b7bc-627e6ba30686\") " pod="openshift-ingress/router-default-5444994796-qbj7r" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.595410 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg9vk\" (UniqueName: \"kubernetes.io/projected/f4fcbcc9-c1bf-4735-ad50-e999ba4c9159-kube-api-access-zg9vk\") pod \"service-ca-9c57cc56f-vfvr6\" (UID: \"f4fcbcc9-c1bf-4735-ad50-e999ba4c9159\") " pod="openshift-service-ca/service-ca-9c57cc56f-vfvr6" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 
14:31:20.595438 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt6rw\" (UniqueName: \"kubernetes.io/projected/2816ec52-510f-49c7-a70e-4e7db428d500-kube-api-access-gt6rw\") pod \"package-server-manager-789f6589d5-nblt7\" (UID: \"2816ec52-510f-49c7-a70e-4e7db428d500\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nblt7" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.595493 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqbsj\" (UniqueName: \"kubernetes.io/projected/a3f8d529-59a4-4ae7-a24f-f6b9d1881a70-kube-api-access-rqbsj\") pod \"multus-admission-controller-857f4d67dd-r7wht\" (UID: \"a3f8d529-59a4-4ae7-a24f-f6b9d1881a70\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r7wht" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.595519 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d71727d2-331b-41f3-889e-71500d2003ae-serving-cert\") pod \"etcd-operator-b45778765-pfbbr\" (UID: \"d71727d2-331b-41f3-889e-71500d2003ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.595563 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpttw\" (UniqueName: \"kubernetes.io/projected/38511f71-ece2-4870-a0c7-22aa90dc0eb6-kube-api-access-qpttw\") pod \"service-ca-operator-777779d784-868nw\" (UID: \"38511f71-ece2-4870-a0c7-22aa90dc0eb6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-868nw" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.595593 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5jzn\" (UniqueName: \"kubernetes.io/projected/a19e92eb-a898-4589-9cff-652e92e2520e-kube-api-access-b5jzn\") pod \"packageserver-d55dfcdfc-54zqs\" (UID: 
\"a19e92eb-a898-4589-9cff-652e92e2520e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.595618 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d71727d2-331b-41f3-889e-71500d2003ae-etcd-client\") pod \"etcd-operator-b45778765-pfbbr\" (UID: \"d71727d2-331b-41f3-889e-71500d2003ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.595649 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a6316d49-679c-4393-bae7-5ca27876cce4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fnp2w\" (UID: \"a6316d49-679c-4393-bae7-5ca27876cce4\") " pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.595688 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d71727d2-331b-41f3-889e-71500d2003ae-etcd-service-ca\") pod \"etcd-operator-b45778765-pfbbr\" (UID: \"d71727d2-331b-41f3-889e-71500d2003ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.595687 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a19e92eb-a898-4589-9cff-652e92e2520e-tmpfs\") pod \"packageserver-d55dfcdfc-54zqs\" (UID: \"a19e92eb-a898-4589-9cff-652e92e2520e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.595732 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/9f6f57f3-b8c7-4c71-b7bc-627e6ba30686-metrics-certs\") pod \"router-default-5444994796-qbj7r\" (UID: \"9f6f57f3-b8c7-4c71-b7bc-627e6ba30686\") " pod="openshift-ingress/router-default-5444994796-qbj7r" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.595836 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d71727d2-331b-41f3-889e-71500d2003ae-config\") pod \"etcd-operator-b45778765-pfbbr\" (UID: \"d71727d2-331b-41f3-889e-71500d2003ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.595939 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq9zn\" (UniqueName: \"kubernetes.io/projected/a6316d49-679c-4393-bae7-5ca27876cce4-kube-api-access-wq9zn\") pod \"marketplace-operator-79b997595-fnp2w\" (UID: \"a6316d49-679c-4393-bae7-5ca27876cce4\") " pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.595968 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2816ec52-510f-49c7-a70e-4e7db428d500-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nblt7\" (UID: \"2816ec52-510f-49c7-a70e-4e7db428d500\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nblt7" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.596008 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9f6f57f3-b8c7-4c71-b7bc-627e6ba30686-stats-auth\") pod \"router-default-5444994796-qbj7r\" (UID: \"9f6f57f3-b8c7-4c71-b7bc-627e6ba30686\") " pod="openshift-ingress/router-default-5444994796-qbj7r" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.605100 4814 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.619666 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1134e0d-63cf-4b7f-8f90-1d340c68d818-serving-cert\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.624512 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.645572 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.655304 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d1134e0d-63cf-4b7f-8f90-1d340c68d818-encryption-config\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.664868 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.671199 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d1134e0d-63cf-4b7f-8f90-1d340c68d818-audit-policies\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.684521 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 02 14:31:20 crc 
kubenswrapper[4814]: I1202 14:31:20.691993 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-console-serving-cert\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.704978 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.709971 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d1134e0d-63cf-4b7f-8f90-1d340c68d818-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.724655 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.735530 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-console-oauth-config\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.744438 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.754092 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1134e0d-63cf-4b7f-8f90-1d340c68d818-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: 
\"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.765218 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.785452 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.794390 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-oauth-serving-cert\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.805064 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.810618 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-console-config\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.826271 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.828115 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-service-ca\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.865951 4814 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.886195 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.905013 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.908711 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19d86ed9-533c-4a11-a33b-74257f105a6e-secret-volume\") pod \"collect-profiles-29411430-ckmjm\" (UID: \"19d86ed9-533c-4a11-a33b-74257f105a6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.917431 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fe138880-67a4-4c5a-ba3c-98af7246dcb5-profile-collector-cert\") pod \"catalog-operator-68c6474976-7vvdn\" (UID: \"fe138880-67a4-4c5a-ba3c-98af7246dcb5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7vvdn" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.925311 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.946123 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.951836 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fe138880-67a4-4c5a-ba3c-98af7246dcb5-srv-cert\") pod 
\"catalog-operator-68c6474976-7vvdn\" (UID: \"fe138880-67a4-4c5a-ba3c-98af7246dcb5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7vvdn" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.965074 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.986124 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 02 14:31:20 crc kubenswrapper[4814]: I1202 14:31:20.999503 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a19e92eb-a898-4589-9cff-652e92e2520e-apiservice-cert\") pod \"packageserver-d55dfcdfc-54zqs\" (UID: \"a19e92eb-a898-4589-9cff-652e92e2520e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.000252 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a19e92eb-a898-4589-9cff-652e92e2520e-webhook-cert\") pod \"packageserver-d55dfcdfc-54zqs\" (UID: \"a19e92eb-a898-4589-9cff-652e92e2520e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.004841 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.024414 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.045656 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 02 14:31:21 crc kubenswrapper[4814]: 
I1202 14:31:21.064899 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.070556 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9f6f57f3-b8c7-4c71-b7bc-627e6ba30686-stats-auth\") pod \"router-default-5444994796-qbj7r\" (UID: \"9f6f57f3-b8c7-4c71-b7bc-627e6ba30686\") " pod="openshift-ingress/router-default-5444994796-qbj7r" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.085384 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.106345 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.121039 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f6f57f3-b8c7-4c71-b7bc-627e6ba30686-metrics-certs\") pod \"router-default-5444994796-qbj7r\" (UID: \"9f6f57f3-b8c7-4c71-b7bc-627e6ba30686\") " pod="openshift-ingress/router-default-5444994796-qbj7r" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.125604 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.144548 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.164415 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.186703 4814 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-root-ca.crt" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.205083 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.210081 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9f6f57f3-b8c7-4c71-b7bc-627e6ba30686-default-certificate\") pod \"router-default-5444994796-qbj7r\" (UID: \"9f6f57f3-b8c7-4c71-b7bc-627e6ba30686\") " pod="openshift-ingress/router-default-5444994796-qbj7r" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.226479 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.245313 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.247720 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f6f57f3-b8c7-4c71-b7bc-627e6ba30686-service-ca-bundle\") pod \"router-default-5444994796-qbj7r\" (UID: \"9f6f57f3-b8c7-4c71-b7bc-627e6ba30686\") " pod="openshift-ingress/router-default-5444994796-qbj7r" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.265677 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.287413 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.306053 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 
14:31:21.323972 4814 request.go:700] Waited for 1.008689353s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&limit=500&resourceVersion=0 Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.327158 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.336483 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f4fcbcc9-c1bf-4735-ad50-e999ba4c9159-signing-cabundle\") pod \"service-ca-9c57cc56f-vfvr6\" (UID: \"f4fcbcc9-c1bf-4735-ad50-e999ba4c9159\") " pod="openshift-service-ca/service-ca-9c57cc56f-vfvr6" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.360843 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg6ps\" (UniqueName: \"kubernetes.io/projected/827fd14f-134f-46a7-8592-ad91bbb05510-kube-api-access-gg6ps\") pod \"machine-config-operator-74547568cd-hsxbt\" (UID: \"827fd14f-134f-46a7-8592-ad91bbb05510\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.372997 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.392621 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9xls\" (UniqueName: \"kubernetes.io/projected/be857bca-e5ac-4d2f-8f62-8ab639f08d51-kube-api-access-n9xls\") pod \"controller-manager-879f6c89f-c2xpx\" (UID: \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.404480 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9shvr\" (UniqueName: \"kubernetes.io/projected/e5f2f2e4-5c34-410d-8f03-1b34ac392176-kube-api-access-9shvr\") pod \"openshift-config-operator-7777fb866f-q8tkk\" (UID: \"e5f2f2e4-5c34-410d-8f03-1b34ac392176\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-q8tkk" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.404990 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.407116 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19d86ed9-533c-4a11-a33b-74257f105a6e-config-volume\") pod \"collect-profiles-29411430-ckmjm\" (UID: \"19d86ed9-533c-4a11-a33b-74257f105a6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.425976 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.444990 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 
14:31:21.466252 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.485964 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.505504 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.525620 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.530911 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d71727d2-331b-41f3-889e-71500d2003ae-etcd-client\") pod \"etcd-operator-b45778765-pfbbr\" (UID: \"d71727d2-331b-41f3-889e-71500d2003ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.545481 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.547302 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d71727d2-331b-41f3-889e-71500d2003ae-config\") pod \"etcd-operator-b45778765-pfbbr\" (UID: \"d71727d2-331b-41f3-889e-71500d2003ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.561116 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt"] Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.565469 4814 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.577315 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d71727d2-331b-41f3-889e-71500d2003ae-etcd-ca\") pod \"etcd-operator-b45778765-pfbbr\" (UID: \"d71727d2-331b-41f3-889e-71500d2003ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.585759 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 02 14:31:21 crc kubenswrapper[4814]: E1202 14:31:21.595345 4814 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Dec 02 14:31:21 crc kubenswrapper[4814]: E1202 14:31:21.595383 4814 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Dec 02 14:31:21 crc kubenswrapper[4814]: E1202 14:31:21.595401 4814 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Dec 02 14:31:21 crc kubenswrapper[4814]: E1202 14:31:21.595354 4814 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 02 14:31:21 crc kubenswrapper[4814]: E1202 14:31:21.595468 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3f8d529-59a4-4ae7-a24f-f6b9d1881a70-webhook-certs podName:a3f8d529-59a4-4ae7-a24f-f6b9d1881a70 nodeName:}" failed. No retries permitted until 2025-12-02 14:31:22.095439836 +0000 UTC m=+144.219802040 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a3f8d529-59a4-4ae7-a24f-f6b9d1881a70-webhook-certs") pod "multus-admission-controller-857f4d67dd-r7wht" (UID: "a3f8d529-59a4-4ae7-a24f-f6b9d1881a70") : failed to sync secret cache: timed out waiting for the condition Dec 02 14:31:21 crc kubenswrapper[4814]: E1202 14:31:21.595507 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38511f71-ece2-4870-a0c7-22aa90dc0eb6-serving-cert podName:38511f71-ece2-4870-a0c7-22aa90dc0eb6 nodeName:}" failed. No retries permitted until 2025-12-02 14:31:22.095480347 +0000 UTC m=+144.219842551 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/38511f71-ece2-4870-a0c7-22aa90dc0eb6-serving-cert") pod "service-ca-operator-777779d784-868nw" (UID: "38511f71-ece2-4870-a0c7-22aa90dc0eb6") : failed to sync secret cache: timed out waiting for the condition Dec 02 14:31:21 crc kubenswrapper[4814]: E1202 14:31:21.595547 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a6316d49-679c-4393-bae7-5ca27876cce4-marketplace-trusted-ca podName:a6316d49-679c-4393-bae7-5ca27876cce4 nodeName:}" failed. No retries permitted until 2025-12-02 14:31:22.095539048 +0000 UTC m=+144.219901252 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/a6316d49-679c-4393-bae7-5ca27876cce4-marketplace-trusted-ca") pod "marketplace-operator-79b997595-fnp2w" (UID: "a6316d49-679c-4393-bae7-5ca27876cce4") : failed to sync configmap cache: timed out waiting for the condition Dec 02 14:31:21 crc kubenswrapper[4814]: E1202 14:31:21.595549 4814 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Dec 02 14:31:21 crc kubenswrapper[4814]: E1202 14:31:21.595568 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/38511f71-ece2-4870-a0c7-22aa90dc0eb6-config podName:38511f71-ece2-4870-a0c7-22aa90dc0eb6 nodeName:}" failed. No retries permitted until 2025-12-02 14:31:22.095559319 +0000 UTC m=+144.219921523 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/38511f71-ece2-4870-a0c7-22aa90dc0eb6-config") pod "service-ca-operator-777779d784-868nw" (UID: "38511f71-ece2-4870-a0c7-22aa90dc0eb6") : failed to sync configmap cache: timed out waiting for the condition Dec 02 14:31:21 crc kubenswrapper[4814]: E1202 14:31:21.595595 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4fcbcc9-c1bf-4735-ad50-e999ba4c9159-signing-key podName:f4fcbcc9-c1bf-4735-ad50-e999ba4c9159 nodeName:}" failed. No retries permitted until 2025-12-02 14:31:22.095583779 +0000 UTC m=+144.219945983 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/f4fcbcc9-c1bf-4735-ad50-e999ba4c9159-signing-key") pod "service-ca-9c57cc56f-vfvr6" (UID: "f4fcbcc9-c1bf-4735-ad50-e999ba4c9159") : failed to sync secret cache: timed out waiting for the condition Dec 02 14:31:21 crc kubenswrapper[4814]: E1202 14:31:21.595719 4814 secret.go:188] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 02 14:31:21 crc kubenswrapper[4814]: E1202 14:31:21.595755 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d71727d2-331b-41f3-889e-71500d2003ae-serving-cert podName:d71727d2-331b-41f3-889e-71500d2003ae nodeName:}" failed. No retries permitted until 2025-12-02 14:31:22.095747453 +0000 UTC m=+144.220109657 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d71727d2-331b-41f3-889e-71500d2003ae-serving-cert") pod "etcd-operator-b45778765-pfbbr" (UID: "d71727d2-331b-41f3-889e-71500d2003ae") : failed to sync secret cache: timed out waiting for the condition Dec 02 14:31:21 crc kubenswrapper[4814]: E1202 14:31:21.598393 4814 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Dec 02 14:31:21 crc kubenswrapper[4814]: E1202 14:31:21.598442 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6316d49-679c-4393-bae7-5ca27876cce4-marketplace-operator-metrics podName:a6316d49-679c-4393-bae7-5ca27876cce4 nodeName:}" failed. No retries permitted until 2025-12-02 14:31:22.098431052 +0000 UTC m=+144.222793256 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/a6316d49-679c-4393-bae7-5ca27876cce4-marketplace-operator-metrics") pod "marketplace-operator-79b997595-fnp2w" (UID: "a6316d49-679c-4393-bae7-5ca27876cce4") : failed to sync secret cache: timed out waiting for the condition Dec 02 14:31:21 crc kubenswrapper[4814]: E1202 14:31:21.598497 4814 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Dec 02 14:31:21 crc kubenswrapper[4814]: E1202 14:31:21.598514 4814 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 02 14:31:21 crc kubenswrapper[4814]: E1202 14:31:21.598609 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d71727d2-331b-41f3-889e-71500d2003ae-etcd-service-ca podName:d71727d2-331b-41f3-889e-71500d2003ae nodeName:}" failed. No retries permitted until 2025-12-02 14:31:22.098581116 +0000 UTC m=+144.222943320 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/d71727d2-331b-41f3-889e-71500d2003ae-etcd-service-ca") pod "etcd-operator-b45778765-pfbbr" (UID: "d71727d2-331b-41f3-889e-71500d2003ae") : failed to sync configmap cache: timed out waiting for the condition Dec 02 14:31:21 crc kubenswrapper[4814]: E1202 14:31:21.598631 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2816ec52-510f-49c7-a70e-4e7db428d500-package-server-manager-serving-cert podName:2816ec52-510f-49c7-a70e-4e7db428d500 nodeName:}" failed. No retries permitted until 2025-12-02 14:31:22.098622277 +0000 UTC m=+144.222984471 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2816ec52-510f-49c7-a70e-4e7db428d500-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-nblt7" (UID: "2816ec52-510f-49c7-a70e-4e7db428d500") : failed to sync secret cache: timed out waiting for the condition Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.605369 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.624503 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.645375 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.648932 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-q8tkk" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.651187 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.665590 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.685807 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.707376 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.733856 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.745643 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.766573 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.785122 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.806881 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.820923 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-q8tkk"] Dec 02 14:31:21 crc kubenswrapper[4814]: W1202 14:31:21.831601 4814 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5f2f2e4_5c34_410d_8f03_1b34ac392176.slice/crio-68507343e87546017ec54e9e29248035763410bf6c02d832e4b57b3de37f71b6 WatchSource:0}: Error finding container 68507343e87546017ec54e9e29248035763410bf6c02d832e4b57b3de37f71b6: Status 404 returned error can't find the container with id 68507343e87546017ec54e9e29248035763410bf6c02d832e4b57b3de37f71b6 Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.845136 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtjb7\" (UniqueName: \"kubernetes.io/projected/c158e403-ca5b-4c91-99b7-b530be24cd62-kube-api-access-rtjb7\") pod \"machine-config-controller-84d6567774-tq8bc\" (UID: \"c158e403-ca5b-4c91-99b7-b530be24cd62\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tq8bc" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.861870 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz9tt\" (UniqueName: \"kubernetes.io/projected/1addd1ce-f5a7-4a84-86c9-6530113a1ea2-kube-api-access-pz9tt\") pod \"apiserver-76f77b778f-5vc2w\" (UID: \"1addd1ce-f5a7-4a84-86c9-6530113a1ea2\") " pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.865731 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.865847 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c2xpx"] Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.901139 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79a75184-8ba4-400f-9113-d88426227339-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-t7nc5\" (UID: \"79a75184-8ba4-400f-9113-d88426227339\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7nc5" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.924868 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7ft4\" (UniqueName: \"kubernetes.io/projected/8720fabc-937c-4b06-a3de-bb2719b71d1e-kube-api-access-k7ft4\") pod \"dns-operator-744455d44c-fxs2g\" (UID: \"8720fabc-937c-4b06-a3de-bb2719b71d1e\") " pod="openshift-dns-operator/dns-operator-744455d44c-fxs2g" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.928651 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-fxs2g" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.931910 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt" event={"ID":"827fd14f-134f-46a7-8592-ad91bbb05510","Type":"ContainerStarted","Data":"54dac9ccf23b8bc65fde0ba62fac51491987d36d1146f3d6a3df93bd8f8d4ec2"} Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.932582 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-q8tkk" event={"ID":"e5f2f2e4-5c34-410d-8f03-1b34ac392176","Type":"ContainerStarted","Data":"68507343e87546017ec54e9e29248035763410bf6c02d832e4b57b3de37f71b6"} Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.940904 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9wnm\" (UniqueName: \"kubernetes.io/projected/0bf522e7-cf36-4600-812d-8f1c82d9ac16-kube-api-access-m9wnm\") pod \"machine-approver-56656f9798-bmxm4\" (UID: \"0bf522e7-cf36-4600-812d-8f1c82d9ac16\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmxm4" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.961123 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5jzq\" (UniqueName: 
\"kubernetes.io/projected/588890a2-6538-4f1e-b102-24efd0c91bb8-kube-api-access-j5jzq\") pod \"route-controller-manager-6576b87f9c-t82r6\" (UID: \"588890a2-6538-4f1e-b102-24efd0c91bb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.966924 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmxm4" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.980374 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tq8bc" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.980787 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nlcv\" (UniqueName: \"kubernetes.io/projected/c8737e3b-a2b1-4b23-90bc-5dc758823ae9-kube-api-access-6nlcv\") pod \"downloads-7954f5f757-tn4k9\" (UID: \"c8737e3b-a2b1-4b23-90bc-5dc758823ae9\") " pod="openshift-console/downloads-7954f5f757-tn4k9" Dec 02 14:31:21 crc kubenswrapper[4814]: I1202 14:31:21.999283 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mflw\" (UniqueName: \"kubernetes.io/projected/050c15f4-8e90-4961-bb14-eaa0aa868b8c-kube-api-access-8mflw\") pod \"oauth-openshift-558db77b4-hjjzp\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.024501 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.042861 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qnc7\" (UniqueName: \"kubernetes.io/projected/79a75184-8ba4-400f-9113-d88426227339-kube-api-access-7qnc7\") pod 
\"cluster-image-registry-operator-dc59b4c8b-t7nc5\" (UID: \"79a75184-8ba4-400f-9113-d88426227339\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7nc5" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.046328 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 02 14:31:22 crc kubenswrapper[4814]: W1202 14:31:22.047698 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe857bca_e5ac_4d2f_8f62_8ab639f08d51.slice/crio-aaff7296d60e3e6f300437a7f5a29d32e9378e7c643840a1e05a286aa8dcc12f WatchSource:0}: Error finding container aaff7296d60e3e6f300437a7f5a29d32e9378e7c643840a1e05a286aa8dcc12f: Status 404 returned error can't find the container with id aaff7296d60e3e6f300437a7f5a29d32e9378e7c643840a1e05a286aa8dcc12f Dec 02 14:31:22 crc kubenswrapper[4814]: W1202 14:31:22.109327 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bf522e7_cf36_4600_812d_8f1c82d9ac16.slice/crio-02a5bb008889ce861a7074791e4fd3f5cea7fcf5f15de95e1b0b20b4eaf9cf97 WatchSource:0}: Error finding container 02a5bb008889ce861a7074791e4fd3f5cea7fcf5f15de95e1b0b20b4eaf9cf97: Status 404 returned error can't find the container with id 02a5bb008889ce861a7074791e4fd3f5cea7fcf5f15de95e1b0b20b4eaf9cf97 Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.110969 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.112323 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.117959 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.124787 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.130902 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2816ec52-510f-49c7-a70e-4e7db428d500-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nblt7\" (UID: \"2816ec52-510f-49c7-a70e-4e7db428d500\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nblt7" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.131014 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38511f71-ece2-4870-a0c7-22aa90dc0eb6-config\") pod \"service-ca-operator-777779d784-868nw\" (UID: \"38511f71-ece2-4870-a0c7-22aa90dc0eb6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-868nw" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.131157 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6316d49-679c-4393-bae7-5ca27876cce4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fnp2w\" (UID: \"a6316d49-679c-4393-bae7-5ca27876cce4\") " pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.131219 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38511f71-ece2-4870-a0c7-22aa90dc0eb6-serving-cert\") pod \"service-ca-operator-777779d784-868nw\" (UID: \"38511f71-ece2-4870-a0c7-22aa90dc0eb6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-868nw" 
Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.131370 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a3f8d529-59a4-4ae7-a24f-f6b9d1881a70-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-r7wht\" (UID: \"a3f8d529-59a4-4ae7-a24f-f6b9d1881a70\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r7wht" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.131519 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f4fcbcc9-c1bf-4735-ad50-e999ba4c9159-signing-key\") pod \"service-ca-9c57cc56f-vfvr6\" (UID: \"f4fcbcc9-c1bf-4735-ad50-e999ba4c9159\") " pod="openshift-service-ca/service-ca-9c57cc56f-vfvr6" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.131642 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d71727d2-331b-41f3-889e-71500d2003ae-serving-cert\") pod \"etcd-operator-b45778765-pfbbr\" (UID: \"d71727d2-331b-41f3-889e-71500d2003ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.131746 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a6316d49-679c-4393-bae7-5ca27876cce4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fnp2w\" (UID: \"a6316d49-679c-4393-bae7-5ca27876cce4\") " pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.131801 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d71727d2-331b-41f3-889e-71500d2003ae-etcd-service-ca\") pod \"etcd-operator-b45778765-pfbbr\" (UID: \"d71727d2-331b-41f3-889e-71500d2003ae\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.133089 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d71727d2-331b-41f3-889e-71500d2003ae-etcd-service-ca\") pod \"etcd-operator-b45778765-pfbbr\" (UID: \"d71727d2-331b-41f3-889e-71500d2003ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.133624 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6316d49-679c-4393-bae7-5ca27876cce4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fnp2w\" (UID: \"a6316d49-679c-4393-bae7-5ca27876cce4\") " pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.136102 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38511f71-ece2-4870-a0c7-22aa90dc0eb6-config\") pod \"service-ca-operator-777779d784-868nw\" (UID: \"38511f71-ece2-4870-a0c7-22aa90dc0eb6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-868nw" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.144597 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38511f71-ece2-4870-a0c7-22aa90dc0eb6-serving-cert\") pod \"service-ca-operator-777779d784-868nw\" (UID: \"38511f71-ece2-4870-a0c7-22aa90dc0eb6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-868nw" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.138473 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a3f8d529-59a4-4ae7-a24f-f6b9d1881a70-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-r7wht\" (UID: 
\"a3f8d529-59a4-4ae7-a24f-f6b9d1881a70\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r7wht" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.147921 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2816ec52-510f-49c7-a70e-4e7db428d500-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nblt7\" (UID: \"2816ec52-510f-49c7-a70e-4e7db428d500\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nblt7" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.149035 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hws8k\" (UniqueName: \"kubernetes.io/projected/8725a2f3-4453-49c6-a55e-2ec80acf982d-kube-api-access-hws8k\") pod \"kube-storage-version-migrator-operator-b67b599dd-qgjwj\" (UID: \"8725a2f3-4453-49c6-a55e-2ec80acf982d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qgjwj" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.149221 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f4fcbcc9-c1bf-4735-ad50-e999ba4c9159-signing-key\") pod \"service-ca-9c57cc56f-vfvr6\" (UID: \"f4fcbcc9-c1bf-4735-ad50-e999ba4c9159\") " pod="openshift-service-ca/service-ca-9c57cc56f-vfvr6" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.149990 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a6316d49-679c-4393-bae7-5ca27876cce4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fnp2w\" (UID: \"a6316d49-679c-4393-bae7-5ca27876cce4\") " pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.151842 4814 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d71727d2-331b-41f3-889e-71500d2003ae-serving-cert\") pod \"etcd-operator-b45778765-pfbbr\" (UID: \"d71727d2-331b-41f3-889e-71500d2003ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.166140 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.186021 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.216915 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.219336 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.225032 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.240985 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tn4k9" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.249018 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.257336 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.266163 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.286094 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.292185 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7nc5" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.301635 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qgjwj" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.308585 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.315550 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tq8bc"] Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.324970 4814 request.go:700] Waited for 1.891498535s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0 Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.333905 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.344519 4814 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.345516 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fxs2g"] Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.366241 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.387984 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 02 14:31:22 crc kubenswrapper[4814]: W1202 14:31:22.398708 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8720fabc_937c_4b06_a3de_bb2719b71d1e.slice/crio-d4ca12e026328081f725d86377bbe71774d920c3f51a98e75626443513119007 WatchSource:0}: Error finding container d4ca12e026328081f725d86377bbe71774d920c3f51a98e75626443513119007: Status 404 returned error can't find the container with id d4ca12e026328081f725d86377bbe71774d920c3f51a98e75626443513119007 Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.440289 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kffg\" (UniqueName: \"kubernetes.io/projected/1b5c70a5-db94-4e6e-b18d-a73ab6274154-kube-api-access-9kffg\") pod \"ingress-operator-5b745b69d9-jxnsc\" (UID: \"1b5c70a5-db94-4e6e-b18d-a73ab6274154\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.448930 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b5c70a5-db94-4e6e-b18d-a73ab6274154-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jxnsc\" (UID: \"1b5c70a5-db94-4e6e-b18d-a73ab6274154\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc" Dec 02 
14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.468852 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5vc2w"] Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.473060 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57zfs\" (UniqueName: \"kubernetes.io/projected/fe138880-67a4-4c5a-ba3c-98af7246dcb5-kube-api-access-57zfs\") pod \"catalog-operator-68c6474976-7vvdn\" (UID: \"fe138880-67a4-4c5a-ba3c-98af7246dcb5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7vvdn" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.486629 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrb9j\" (UniqueName: \"kubernetes.io/projected/dd8c1623-2ec1-48f8-a92d-a6d160ee78d7-kube-api-access-wrb9j\") pod \"cluster-samples-operator-665b6dd947-xtgt5\" (UID: \"dd8c1623-2ec1-48f8-a92d-a6d160ee78d7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xtgt5" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.524490 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a325a62-edee-4944-8cd5-5e74d2e8d822-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bd5zz\" (UID: \"0a325a62-edee-4944-8cd5-5e74d2e8d822\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bd5zz" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.529753 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dqbn\" (UniqueName: \"kubernetes.io/projected/8584026b-8df2-4dca-9a5b-ee9a1fe36808-kube-api-access-8dqbn\") pod \"openshift-controller-manager-operator-756b6f6bc6-ch9nz\" (UID: \"8584026b-8df2-4dca-9a5b-ee9a1fe36808\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ch9nz" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 
14:31:22.562931 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmv27\" (UniqueName: \"kubernetes.io/projected/153235f2-4506-45d4-801f-8c96d28df313-kube-api-access-gmv27\") pod \"authentication-operator-69f744f599-2kv5r\" (UID: \"153235f2-4506-45d4-801f-8c96d28df313\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2kv5r" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.571314 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/584ddac6-ca1c-4d00-ae3d-6b6e9b0a32df-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-98tt4\" (UID: \"584ddac6-ca1c-4d00-ae3d-6b6e9b0a32df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-98tt4" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.584560 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f47jk\" (UniqueName: \"kubernetes.io/projected/d1134e0d-63cf-4b7f-8f90-1d340c68d818-kube-api-access-f47jk\") pod \"apiserver-7bbb656c7d-xc88w\" (UID: \"d1134e0d-63cf-4b7f-8f90-1d340c68d818\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.602355 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-2kv5r" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.611106 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6l6j\" (UniqueName: \"kubernetes.io/projected/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-kube-api-access-h6l6j\") pod \"console-f9d7485db-794sp\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.621537 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ab31756-fb86-44da-b02b-54c45f173962-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r9tww\" (UID: \"4ab31756-fb86-44da-b02b-54c45f173962\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r9tww" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.624506 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bd5zz" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.636469 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.641045 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-98tt4" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.643299 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmfws\" (UniqueName: \"kubernetes.io/projected/bcdcf420-00a9-4e74-a62a-20422778b5e0-kube-api-access-bmfws\") pod \"openshift-apiserver-operator-796bbdcf4f-h2zzv\" (UID: \"bcdcf420-00a9-4e74-a62a-20422778b5e0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h2zzv" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.650120 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ch9nz" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.657317 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h2zzv" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.661493 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzhrx\" (UniqueName: \"kubernetes.io/projected/664ee5fe-5761-497f-80d4-8b9a7059d09a-kube-api-access-bzhrx\") pod \"console-operator-58897d9998-b4rng\" (UID: \"664ee5fe-5761-497f-80d4-8b9a7059d09a\") " pod="openshift-console-operator/console-operator-58897d9998-b4rng" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.666336 4814 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.672210 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r9tww" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.675429 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xtgt5" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.688745 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.691794 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.692766 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.706117 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7vvdn" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.709190 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.716331 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6"] Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.727660 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.746360 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.766735 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.766898 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7nc5"] Dec 02 14:31:22 crc 
kubenswrapper[4814]: W1202 14:31:22.789765 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod588890a2_6538_4f1e_b102_24efd0c91bb8.slice/crio-2484fd4070f10d5db32ab5524322a9eec8619413ad2c07638148f2bcb986b3e5 WatchSource:0}: Error finding container 2484fd4070f10d5db32ab5524322a9eec8619413ad2c07638148f2bcb986b3e5: Status 404 returned error can't find the container with id 2484fd4070f10d5db32ab5524322a9eec8619413ad2c07638148f2bcb986b3e5 Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.800113 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hjjzp"] Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.804677 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5hrc\" (UniqueName: \"kubernetes.io/projected/19d86ed9-533c-4a11-a33b-74257f105a6e-kube-api-access-k5hrc\") pod \"collect-profiles-29411430-ckmjm\" (UID: \"19d86ed9-533c-4a11-a33b-74257f105a6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.820997 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tn4k9"] Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.827793 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6nnc\" (UniqueName: \"kubernetes.io/projected/d71727d2-331b-41f3-889e-71500d2003ae-kube-api-access-s6nnc\") pod \"etcd-operator-b45778765-pfbbr\" (UID: \"d71727d2-331b-41f3-889e-71500d2003ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.852846 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q597p\" (UniqueName: \"kubernetes.io/projected/a784542d-74ab-4b13-90cb-9159a9dbc173-kube-api-access-q597p\") pod 
\"migrator-59844c95c7-8mxvp\" (UID: \"a784542d-74ab-4b13-90cb-9159a9dbc173\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8mxvp" Dec 02 14:31:22 crc kubenswrapper[4814]: W1202 14:31:22.858739 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79a75184_8ba4_400f_9113_d88426227339.slice/crio-9b7017fdf00aee68ec327bc85cdc6f7d446d4073ad98a63be5c6abc310e84d96 WatchSource:0}: Error finding container 9b7017fdf00aee68ec327bc85cdc6f7d446d4073ad98a63be5c6abc310e84d96: Status 404 returned error can't find the container with id 9b7017fdf00aee68ec327bc85cdc6f7d446d4073ad98a63be5c6abc310e84d96 Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.864261 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njxtm\" (UniqueName: \"kubernetes.io/projected/9f6f57f3-b8c7-4c71-b7bc-627e6ba30686-kube-api-access-njxtm\") pod \"router-default-5444994796-qbj7r\" (UID: \"9f6f57f3-b8c7-4c71-b7bc-627e6ba30686\") " pod="openshift-ingress/router-default-5444994796-qbj7r" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.884755 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg9vk\" (UniqueName: \"kubernetes.io/projected/f4fcbcc9-c1bf-4735-ad50-e999ba4c9159-kube-api-access-zg9vk\") pod \"service-ca-9c57cc56f-vfvr6\" (UID: \"f4fcbcc9-c1bf-4735-ad50-e999ba4c9159\") " pod="openshift-service-ca/service-ca-9c57cc56f-vfvr6" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.885646 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qgjwj"] Dec 02 14:31:22 crc kubenswrapper[4814]: W1202 14:31:22.900588 4814 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8737e3b_a2b1_4b23_90bc_5dc758823ae9.slice/crio-6a9897bcd5ccdcfb0dd13432ea878b1833fc7fc1d557b18b21cb2a5636ca9985 WatchSource:0}: Error finding container 6a9897bcd5ccdcfb0dd13432ea878b1833fc7fc1d557b18b21cb2a5636ca9985: Status 404 returned error can't find the container with id 6a9897bcd5ccdcfb0dd13432ea878b1833fc7fc1d557b18b21cb2a5636ca9985 Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.902529 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt6rw\" (UniqueName: \"kubernetes.io/projected/2816ec52-510f-49c7-a70e-4e7db428d500-kube-api-access-gt6rw\") pod \"package-server-manager-789f6589d5-nblt7\" (UID: \"2816ec52-510f-49c7-a70e-4e7db428d500\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nblt7" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.919450 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-b4rng" Dec 02 14:31:22 crc kubenswrapper[4814]: W1202 14:31:22.919831 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod050c15f4_8e90_4961_bb14_eaa0aa868b8c.slice/crio-89a9784bdfb52b87c600d4183fc9a5398ce0e780343d0b7d9f82fa24ccccd78f WatchSource:0}: Error finding container 89a9784bdfb52b87c600d4183fc9a5398ce0e780343d0b7d9f82fa24ccccd78f: Status 404 returned error can't find the container with id 89a9784bdfb52b87c600d4183fc9a5398ce0e780343d0b7d9f82fa24ccccd78f Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.934551 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpttw\" (UniqueName: \"kubernetes.io/projected/38511f71-ece2-4870-a0c7-22aa90dc0eb6-kube-api-access-qpttw\") pod \"service-ca-operator-777779d784-868nw\" (UID: \"38511f71-ece2-4870-a0c7-22aa90dc0eb6\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-868nw" Dec 02 14:31:22 crc kubenswrapper[4814]: W1202 14:31:22.939684 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8725a2f3_4453_49c6_a55e_2ec80acf982d.slice/crio-d48b1363ea493f4b6b73f42f23058addd3f6317d0699ba705074dc1dd4026011 WatchSource:0}: Error finding container d48b1363ea493f4b6b73f42f23058addd3f6317d0699ba705074dc1dd4026011: Status 404 returned error can't find the container with id d48b1363ea493f4b6b73f42f23058addd3f6317d0699ba705074dc1dd4026011 Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.950111 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5jzn\" (UniqueName: \"kubernetes.io/projected/a19e92eb-a898-4589-9cff-652e92e2520e-kube-api-access-b5jzn\") pod \"packageserver-d55dfcdfc-54zqs\" (UID: \"a19e92eb-a898-4589-9cff-652e92e2520e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.956428 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmxm4" event={"ID":"0bf522e7-cf36-4600-812d-8f1c82d9ac16","Type":"ContainerStarted","Data":"83d7a9bc445abef868cc218731606803252469b285b3377270ae67ba1894df47"} Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.956473 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmxm4" event={"ID":"0bf522e7-cf36-4600-812d-8f1c82d9ac16","Type":"ContainerStarted","Data":"02a5bb008889ce861a7074791e4fd3f5cea7fcf5f15de95e1b0b20b4eaf9cf97"} Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.960747 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqbsj\" (UniqueName: \"kubernetes.io/projected/a3f8d529-59a4-4ae7-a24f-f6b9d1881a70-kube-api-access-rqbsj\") pod 
\"multus-admission-controller-857f4d67dd-r7wht\" (UID: \"a3f8d529-59a4-4ae7-a24f-f6b9d1881a70\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r7wht" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.969131 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tq8bc" event={"ID":"c158e403-ca5b-4c91-99b7-b530be24cd62","Type":"ContainerStarted","Data":"cd5bbcc5e57053c967b5889bc78f3d1c68b29c8174024859616b32cbe8143e1d"} Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.969159 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tq8bc" event={"ID":"c158e403-ca5b-4c91-99b7-b530be24cd62","Type":"ContainerStarted","Data":"3ba00b45007fc74cb744988677fc2a3e7436c3483d5f16f6a9b0b39dea46f8d3"} Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.973088 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" event={"ID":"1addd1ce-f5a7-4a84-86c9-6530113a1ea2","Type":"ContainerStarted","Data":"4435b6bafa3d9475821fd3f84e9f26e0c5e5d5be2103d03c4a098a70219b766b"} Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.985679 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq9zn\" (UniqueName: \"kubernetes.io/projected/a6316d49-679c-4393-bae7-5ca27876cce4-kube-api-access-wq9zn\") pod \"marketplace-operator-79b997595-fnp2w\" (UID: \"a6316d49-679c-4393-bae7-5ca27876cce4\") " pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" Dec 02 14:31:22 crc kubenswrapper[4814]: I1202 14:31:22.990032 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-2kv5r"] Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:22.998548 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-fxs2g" 
event={"ID":"8720fabc-937c-4b06-a3de-bb2719b71d1e","Type":"ContainerStarted","Data":"805451a17e282a6da7bc4e415e3c0da347c83dc3877d25daec12b56b63e8e5d4"} Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.001296 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-fxs2g" event={"ID":"8720fabc-937c-4b06-a3de-bb2719b71d1e","Type":"ContainerStarted","Data":"d4ca12e026328081f725d86377bbe71774d920c3f51a98e75626443513119007"} Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.014538 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8mxvp" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.028337 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bd5zz"] Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.033124 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.035624 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt" event={"ID":"827fd14f-134f-46a7-8592-ad91bbb05510","Type":"ContainerStarted","Data":"d77c8d6e51a106da2ea8a51393fd2e04f4a312a964bd0d9aad7cde15826f0454"} Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.035667 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt" event={"ID":"827fd14f-134f-46a7-8592-ad91bbb05510","Type":"ContainerStarted","Data":"12d998b907d4aab2f8a5ee8e5d9410daad8fdead1e63d44cfa9e44cd0e4bea06"} Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.036059 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-qbj7r" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.050631 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vfvr6" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.058535 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.059743 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6a507d1f-367a-4ad8-b436-592c077be1f1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.059784 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5fcn\" (UniqueName: \"kubernetes.io/projected/6a507d1f-367a-4ad8-b436-592c077be1f1-kube-api-access-w5fcn\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.059855 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6a507d1f-367a-4ad8-b436-592c077be1f1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.060006 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/6a507d1f-367a-4ad8-b436-592c077be1f1-trusted-ca\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.060043 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4811cc2a-adc3-4fb7-8f09-d9426a60fffc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zqrmh\" (UID: \"4811cc2a-adc3-4fb7-8f09-d9426a60fffc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zqrmh" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.060161 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.060303 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6a507d1f-367a-4ad8-b436-592c077be1f1-registry-certificates\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.060380 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6a507d1f-367a-4ad8-b436-592c077be1f1-registry-tls\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: 
\"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.060406 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w4d6\" (UniqueName: \"kubernetes.io/projected/4811cc2a-adc3-4fb7-8f09-d9426a60fffc-kube-api-access-2w4d6\") pod \"control-plane-machine-set-operator-78cbb6b69f-zqrmh\" (UID: \"4811cc2a-adc3-4fb7-8f09-d9426a60fffc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zqrmh" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.060447 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6a507d1f-367a-4ad8-b436-592c077be1f1-bound-sa-token\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: E1202 14:31:23.067022 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:23.567001035 +0000 UTC m=+145.691363239 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.071968 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.078436 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nblt7" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.078864 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc"] Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.086493 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-r7wht" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.087376 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7nc5" event={"ID":"79a75184-8ba4-400f-9113-d88426227339","Type":"ContainerStarted","Data":"9b7017fdf00aee68ec327bc85cdc6f7d446d4073ad98a63be5c6abc310e84d96"} Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.093538 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-868nw" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.098152 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.114131 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" event={"ID":"588890a2-6538-4f1e-b102-24efd0c91bb8","Type":"ContainerStarted","Data":"2484fd4070f10d5db32ab5524322a9eec8619413ad2c07638148f2bcb986b3e5"} Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.144784 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" event={"ID":"be857bca-e5ac-4d2f-8f62-8ab639f08d51","Type":"ContainerStarted","Data":"d2404936c8cd90ac40fab08961892891847826a7252330751a9def5df79572c9"} Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.144833 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" event={"ID":"be857bca-e5ac-4d2f-8f62-8ab639f08d51","Type":"ContainerStarted","Data":"aaff7296d60e3e6f300437a7f5a29d32e9378e7c643840a1e05a286aa8dcc12f"} Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.146407 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tn4k9" event={"ID":"c8737e3b-a2b1-4b23-90bc-5dc758823ae9","Type":"ContainerStarted","Data":"6a9897bcd5ccdcfb0dd13432ea878b1833fc7fc1d557b18b21cb2a5636ca9985"} Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.149258 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.161196 4814 generic.go:334] "Generic (PLEG): container finished" podID="e5f2f2e4-5c34-410d-8f03-1b34ac392176" containerID="190441ef7bc244e549dfcf15d9b26172f946c1a699bfc3169f07892b33e72656" exitCode=0 Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.161212 4814 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.161260 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-q8tkk" event={"ID":"e5f2f2e4-5c34-410d-8f03-1b34ac392176","Type":"ContainerDied","Data":"190441ef7bc244e549dfcf15d9b26172f946c1a699bfc3169f07892b33e72656"} Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.161487 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6a507d1f-367a-4ad8-b436-592c077be1f1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.161530 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5fcn\" (UniqueName: \"kubernetes.io/projected/6a507d1f-367a-4ad8-b436-592c077be1f1-kube-api-access-w5fcn\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.161634 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcx7w\" (UniqueName: \"kubernetes.io/projected/5e3867d8-bafc-4b43-ac79-9cbc271baaa6-kube-api-access-hcx7w\") pod \"olm-operator-6b444d44fb-nbcdh\" (UID: \"5e3867d8-bafc-4b43-ac79-9cbc271baaa6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbcdh" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 
14:31:23.161663 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6a507d1f-367a-4ad8-b436-592c077be1f1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.161717 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/be15a2d4-290c-45de-a493-4161afdde26f-mountpoint-dir\") pod \"csi-hostpathplugin-lx28q\" (UID: \"be15a2d4-290c-45de-a493-4161afdde26f\") " pod="hostpath-provisioner/csi-hostpathplugin-lx28q" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.161804 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79tgf\" (UniqueName: \"kubernetes.io/projected/6f82c67c-0abb-41a5-95eb-533ca424d192-kube-api-access-79tgf\") pod \"ingress-canary-9fx8s\" (UID: \"6f82c67c-0abb-41a5-95eb-533ca424d192\") " pod="openshift-ingress-canary/ingress-canary-9fx8s" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.161945 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25faedf3-ec5c-42cb-804c-15fdfc7ac152-config\") pod \"machine-api-operator-5694c8668f-k65hc\" (UID: \"25faedf3-ec5c-42cb-804c-15fdfc7ac152\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k65hc" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.162146 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a507d1f-367a-4ad8-b436-592c077be1f1-trusted-ca\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.162176 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4811cc2a-adc3-4fb7-8f09-d9426a60fffc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zqrmh\" (UID: \"4811cc2a-adc3-4fb7-8f09-d9426a60fffc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zqrmh" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.162217 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/25faedf3-ec5c-42cb-804c-15fdfc7ac152-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-k65hc\" (UID: \"25faedf3-ec5c-42cb-804c-15fdfc7ac152\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k65hc" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.162293 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f82c67c-0abb-41a5-95eb-533ca424d192-cert\") pod \"ingress-canary-9fx8s\" (UID: \"6f82c67c-0abb-41a5-95eb-533ca424d192\") " pod="openshift-ingress-canary/ingress-canary-9fx8s" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.162392 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vvxm\" (UniqueName: \"kubernetes.io/projected/be15a2d4-290c-45de-a493-4161afdde26f-kube-api-access-5vvxm\") pod \"csi-hostpathplugin-lx28q\" (UID: \"be15a2d4-290c-45de-a493-4161afdde26f\") " pod="hostpath-provisioner/csi-hostpathplugin-lx28q" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.162487 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/5e3867d8-bafc-4b43-ac79-9cbc271baaa6-srv-cert\") pod \"olm-operator-6b444d44fb-nbcdh\" (UID: \"5e3867d8-bafc-4b43-ac79-9cbc271baaa6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbcdh" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.162550 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9410a720-183e-4f95-adb4-3ec7aaeb03e0-config-volume\") pod \"dns-default-mx8k5\" (UID: \"9410a720-183e-4f95-adb4-3ec7aaeb03e0\") " pod="openshift-dns/dns-default-mx8k5" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.162613 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/25faedf3-ec5c-42cb-804c-15fdfc7ac152-images\") pod \"machine-api-operator-5694c8668f-k65hc\" (UID: \"25faedf3-ec5c-42cb-804c-15fdfc7ac152\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k65hc" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.162639 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrg2g\" (UniqueName: \"kubernetes.io/projected/9410a720-183e-4f95-adb4-3ec7aaeb03e0-kube-api-access-qrg2g\") pod \"dns-default-mx8k5\" (UID: \"9410a720-183e-4f95-adb4-3ec7aaeb03e0\") " pod="openshift-dns/dns-default-mx8k5" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.162682 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/be15a2d4-290c-45de-a493-4161afdde26f-socket-dir\") pod \"csi-hostpathplugin-lx28q\" (UID: \"be15a2d4-290c-45de-a493-4161afdde26f\") " pod="hostpath-provisioner/csi-hostpathplugin-lx28q" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.167017 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5e3867d8-bafc-4b43-ac79-9cbc271baaa6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nbcdh\" (UID: \"5e3867d8-bafc-4b43-ac79-9cbc271baaa6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbcdh" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.167812 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.168409 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a507d1f-367a-4ad8-b436-592c077be1f1-trusted-ca\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.169245 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/be15a2d4-290c-45de-a493-4161afdde26f-registration-dir\") pod \"csi-hostpathplugin-lx28q\" (UID: \"be15a2d4-290c-45de-a493-4161afdde26f\") " pod="hostpath-provisioner/csi-hostpathplugin-lx28q" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.176149 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4811cc2a-adc3-4fb7-8f09-d9426a60fffc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zqrmh\" (UID: \"4811cc2a-adc3-4fb7-8f09-d9426a60fffc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zqrmh" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.169478 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6a507d1f-367a-4ad8-b436-592c077be1f1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.170112 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6a507d1f-367a-4ad8-b436-592c077be1f1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: E1202 14:31:23.169319 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:23.669291439 +0000 UTC m=+145.793653813 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.180617 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/be15a2d4-290c-45de-a493-4161afdde26f-plugins-dir\") pod \"csi-hostpathplugin-lx28q\" (UID: \"be15a2d4-290c-45de-a493-4161afdde26f\") " pod="hostpath-provisioner/csi-hostpathplugin-lx28q" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.180929 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpm2z\" (UniqueName: \"kubernetes.io/projected/25faedf3-ec5c-42cb-804c-15fdfc7ac152-kube-api-access-lpm2z\") pod \"machine-api-operator-5694c8668f-k65hc\" (UID: \"25faedf3-ec5c-42cb-804c-15fdfc7ac152\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k65hc" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.182542 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/be15a2d4-290c-45de-a493-4161afdde26f-csi-data-dir\") pod \"csi-hostpathplugin-lx28q\" (UID: \"be15a2d4-290c-45de-a493-4161afdde26f\") " pod="hostpath-provisioner/csi-hostpathplugin-lx28q" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.183431 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6a507d1f-367a-4ad8-b436-592c077be1f1-registry-certificates\") pod 
\"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.184053 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6a507d1f-367a-4ad8-b436-592c077be1f1-registry-tls\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.185536 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/270b826a-775d-4172-9c7e-5a12320a5046-node-bootstrap-token\") pod \"machine-config-server-5gr2f\" (UID: \"270b826a-775d-4172-9c7e-5a12320a5046\") " pod="openshift-machine-config-operator/machine-config-server-5gr2f" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.186298 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xtgt5"] Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.186546 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w4d6\" (UniqueName: \"kubernetes.io/projected/4811cc2a-adc3-4fb7-8f09-d9426a60fffc-kube-api-access-2w4d6\") pod \"control-plane-machine-set-operator-78cbb6b69f-zqrmh\" (UID: \"4811cc2a-adc3-4fb7-8f09-d9426a60fffc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zqrmh" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.187185 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9410a720-183e-4f95-adb4-3ec7aaeb03e0-metrics-tls\") pod \"dns-default-mx8k5\" (UID: \"9410a720-183e-4f95-adb4-3ec7aaeb03e0\") " 
pod="openshift-dns/dns-default-mx8k5" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.187331 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6a507d1f-367a-4ad8-b436-592c077be1f1-bound-sa-token\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.187423 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/270b826a-775d-4172-9c7e-5a12320a5046-certs\") pod \"machine-config-server-5gr2f\" (UID: \"270b826a-775d-4172-9c7e-5a12320a5046\") " pod="openshift-machine-config-operator/machine-config-server-5gr2f" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.187578 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxkfm\" (UniqueName: \"kubernetes.io/projected/270b826a-775d-4172-9c7e-5a12320a5046-kube-api-access-mxkfm\") pod \"machine-config-server-5gr2f\" (UID: \"270b826a-775d-4172-9c7e-5a12320a5046\") " pod="openshift-machine-config-operator/machine-config-server-5gr2f" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.191998 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6a507d1f-367a-4ad8-b436-592c077be1f1-registry-tls\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.194744 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6a507d1f-367a-4ad8-b436-592c077be1f1-registry-certificates\") pod 
\"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.217154 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5fcn\" (UniqueName: \"kubernetes.io/projected/6a507d1f-367a-4ad8-b436-592c077be1f1-kube-api-access-w5fcn\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.249727 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w4d6\" (UniqueName: \"kubernetes.io/projected/4811cc2a-adc3-4fb7-8f09-d9426a60fffc-kube-api-access-2w4d6\") pod \"control-plane-machine-set-operator-78cbb6b69f-zqrmh\" (UID: \"4811cc2a-adc3-4fb7-8f09-d9426a60fffc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zqrmh" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.264363 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6a507d1f-367a-4ad8-b436-592c077be1f1-bound-sa-token\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.291780 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/270b826a-775d-4172-9c7e-5a12320a5046-node-bootstrap-token\") pod \"machine-config-server-5gr2f\" (UID: \"270b826a-775d-4172-9c7e-5a12320a5046\") " pod="openshift-machine-config-operator/machine-config-server-5gr2f" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.292080 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/9410a720-183e-4f95-adb4-3ec7aaeb03e0-metrics-tls\") pod \"dns-default-mx8k5\" (UID: \"9410a720-183e-4f95-adb4-3ec7aaeb03e0\") " pod="openshift-dns/dns-default-mx8k5" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.296539 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/270b826a-775d-4172-9c7e-5a12320a5046-certs\") pod \"machine-config-server-5gr2f\" (UID: \"270b826a-775d-4172-9c7e-5a12320a5046\") " pod="openshift-machine-config-operator/machine-config-server-5gr2f" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.296664 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxkfm\" (UniqueName: \"kubernetes.io/projected/270b826a-775d-4172-9c7e-5a12320a5046-kube-api-access-mxkfm\") pod \"machine-config-server-5gr2f\" (UID: \"270b826a-775d-4172-9c7e-5a12320a5046\") " pod="openshift-machine-config-operator/machine-config-server-5gr2f" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.296831 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcx7w\" (UniqueName: \"kubernetes.io/projected/5e3867d8-bafc-4b43-ac79-9cbc271baaa6-kube-api-access-hcx7w\") pod \"olm-operator-6b444d44fb-nbcdh\" (UID: \"5e3867d8-bafc-4b43-ac79-9cbc271baaa6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbcdh" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.296947 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/be15a2d4-290c-45de-a493-4161afdde26f-mountpoint-dir\") pod \"csi-hostpathplugin-lx28q\" (UID: \"be15a2d4-290c-45de-a493-4161afdde26f\") " pod="hostpath-provisioner/csi-hostpathplugin-lx28q" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.297045 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-79tgf\" (UniqueName: \"kubernetes.io/projected/6f82c67c-0abb-41a5-95eb-533ca424d192-kube-api-access-79tgf\") pod \"ingress-canary-9fx8s\" (UID: \"6f82c67c-0abb-41a5-95eb-533ca424d192\") " pod="openshift-ingress-canary/ingress-canary-9fx8s" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.297141 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25faedf3-ec5c-42cb-804c-15fdfc7ac152-config\") pod \"machine-api-operator-5694c8668f-k65hc\" (UID: \"25faedf3-ec5c-42cb-804c-15fdfc7ac152\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k65hc" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.297330 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/25faedf3-ec5c-42cb-804c-15fdfc7ac152-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-k65hc\" (UID: \"25faedf3-ec5c-42cb-804c-15fdfc7ac152\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k65hc" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.297432 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f82c67c-0abb-41a5-95eb-533ca424d192-cert\") pod \"ingress-canary-9fx8s\" (UID: \"6f82c67c-0abb-41a5-95eb-533ca424d192\") " pod="openshift-ingress-canary/ingress-canary-9fx8s" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.297542 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vvxm\" (UniqueName: \"kubernetes.io/projected/be15a2d4-290c-45de-a493-4161afdde26f-kube-api-access-5vvxm\") pod \"csi-hostpathplugin-lx28q\" (UID: \"be15a2d4-290c-45de-a493-4161afdde26f\") " pod="hostpath-provisioner/csi-hostpathplugin-lx28q" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.297652 4814 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5e3867d8-bafc-4b43-ac79-9cbc271baaa6-srv-cert\") pod \"olm-operator-6b444d44fb-nbcdh\" (UID: \"5e3867d8-bafc-4b43-ac79-9cbc271baaa6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbcdh" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.297778 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9410a720-183e-4f95-adb4-3ec7aaeb03e0-config-volume\") pod \"dns-default-mx8k5\" (UID: \"9410a720-183e-4f95-adb4-3ec7aaeb03e0\") " pod="openshift-dns/dns-default-mx8k5" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.297892 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/25faedf3-ec5c-42cb-804c-15fdfc7ac152-images\") pod \"machine-api-operator-5694c8668f-k65hc\" (UID: \"25faedf3-ec5c-42cb-804c-15fdfc7ac152\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k65hc" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.297990 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrg2g\" (UniqueName: \"kubernetes.io/projected/9410a720-183e-4f95-adb4-3ec7aaeb03e0-kube-api-access-qrg2g\") pod \"dns-default-mx8k5\" (UID: \"9410a720-183e-4f95-adb4-3ec7aaeb03e0\") " pod="openshift-dns/dns-default-mx8k5" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.298084 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/be15a2d4-290c-45de-a493-4161afdde26f-socket-dir\") pod \"csi-hostpathplugin-lx28q\" (UID: \"be15a2d4-290c-45de-a493-4161afdde26f\") " pod="hostpath-provisioner/csi-hostpathplugin-lx28q" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.298187 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/5e3867d8-bafc-4b43-ac79-9cbc271baaa6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nbcdh\" (UID: \"5e3867d8-bafc-4b43-ac79-9cbc271baaa6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbcdh" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.298316 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.298443 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/be15a2d4-290c-45de-a493-4161afdde26f-registration-dir\") pod \"csi-hostpathplugin-lx28q\" (UID: \"be15a2d4-290c-45de-a493-4161afdde26f\") " pod="hostpath-provisioner/csi-hostpathplugin-lx28q" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.298547 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/be15a2d4-290c-45de-a493-4161afdde26f-plugins-dir\") pod \"csi-hostpathplugin-lx28q\" (UID: \"be15a2d4-290c-45de-a493-4161afdde26f\") " pod="hostpath-provisioner/csi-hostpathplugin-lx28q" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.298636 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpm2z\" (UniqueName: \"kubernetes.io/projected/25faedf3-ec5c-42cb-804c-15fdfc7ac152-kube-api-access-lpm2z\") pod \"machine-api-operator-5694c8668f-k65hc\" (UID: \"25faedf3-ec5c-42cb-804c-15fdfc7ac152\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k65hc" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.298737 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/be15a2d4-290c-45de-a493-4161afdde26f-csi-data-dir\") pod \"csi-hostpathplugin-lx28q\" (UID: \"be15a2d4-290c-45de-a493-4161afdde26f\") " pod="hostpath-provisioner/csi-hostpathplugin-lx28q" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.298916 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/be15a2d4-290c-45de-a493-4161afdde26f-csi-data-dir\") pod \"csi-hostpathplugin-lx28q\" (UID: \"be15a2d4-290c-45de-a493-4161afdde26f\") " pod="hostpath-provisioner/csi-hostpathplugin-lx28q" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.300053 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25faedf3-ec5c-42cb-804c-15fdfc7ac152-config\") pod \"machine-api-operator-5694c8668f-k65hc\" (UID: \"25faedf3-ec5c-42cb-804c-15fdfc7ac152\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k65hc" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.300719 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/be15a2d4-290c-45de-a493-4161afdde26f-socket-dir\") pod \"csi-hostpathplugin-lx28q\" (UID: \"be15a2d4-290c-45de-a493-4161afdde26f\") " pod="hostpath-provisioner/csi-hostpathplugin-lx28q" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.300794 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/25faedf3-ec5c-42cb-804c-15fdfc7ac152-images\") pod \"machine-api-operator-5694c8668f-k65hc\" (UID: \"25faedf3-ec5c-42cb-804c-15fdfc7ac152\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k65hc" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.301895 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/be15a2d4-290c-45de-a493-4161afdde26f-mountpoint-dir\") pod \"csi-hostpathplugin-lx28q\" (UID: \"be15a2d4-290c-45de-a493-4161afdde26f\") " pod="hostpath-provisioner/csi-hostpathplugin-lx28q" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.302734 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/be15a2d4-290c-45de-a493-4161afdde26f-plugins-dir\") pod \"csi-hostpathplugin-lx28q\" (UID: \"be15a2d4-290c-45de-a493-4161afdde26f\") " pod="hostpath-provisioner/csi-hostpathplugin-lx28q" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.302818 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/be15a2d4-290c-45de-a493-4161afdde26f-registration-dir\") pod \"csi-hostpathplugin-lx28q\" (UID: \"be15a2d4-290c-45de-a493-4161afdde26f\") " pod="hostpath-provisioner/csi-hostpathplugin-lx28q" Dec 02 14:31:23 crc kubenswrapper[4814]: E1202 14:31:23.302846 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:23.802825395 +0000 UTC m=+145.927187599 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.308402 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5e3867d8-bafc-4b43-ac79-9cbc271baaa6-srv-cert\") pod \"olm-operator-6b444d44fb-nbcdh\" (UID: \"5e3867d8-bafc-4b43-ac79-9cbc271baaa6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbcdh" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.308806 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9410a720-183e-4f95-adb4-3ec7aaeb03e0-config-volume\") pod \"dns-default-mx8k5\" (UID: \"9410a720-183e-4f95-adb4-3ec7aaeb03e0\") " pod="openshift-dns/dns-default-mx8k5" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.310799 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/270b826a-775d-4172-9c7e-5a12320a5046-certs\") pod \"machine-config-server-5gr2f\" (UID: \"270b826a-775d-4172-9c7e-5a12320a5046\") " pod="openshift-machine-config-operator/machine-config-server-5gr2f" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.313302 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/25faedf3-ec5c-42cb-804c-15fdfc7ac152-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-k65hc\" (UID: \"25faedf3-ec5c-42cb-804c-15fdfc7ac152\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-k65hc" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.317331 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f82c67c-0abb-41a5-95eb-533ca424d192-cert\") pod \"ingress-canary-9fx8s\" (UID: \"6f82c67c-0abb-41a5-95eb-533ca424d192\") " pod="openshift-ingress-canary/ingress-canary-9fx8s" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.329600 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zqrmh" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.332592 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/270b826a-775d-4172-9c7e-5a12320a5046-node-bootstrap-token\") pod \"machine-config-server-5gr2f\" (UID: \"270b826a-775d-4172-9c7e-5a12320a5046\") " pod="openshift-machine-config-operator/machine-config-server-5gr2f" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.333176 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9410a720-183e-4f95-adb4-3ec7aaeb03e0-metrics-tls\") pod \"dns-default-mx8k5\" (UID: \"9410a720-183e-4f95-adb4-3ec7aaeb03e0\") " pod="openshift-dns/dns-default-mx8k5" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.335353 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5e3867d8-bafc-4b43-ac79-9cbc271baaa6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nbcdh\" (UID: \"5e3867d8-bafc-4b43-ac79-9cbc271baaa6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbcdh" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.339201 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79tgf\" 
(UniqueName: \"kubernetes.io/projected/6f82c67c-0abb-41a5-95eb-533ca424d192-kube-api-access-79tgf\") pod \"ingress-canary-9fx8s\" (UID: \"6f82c67c-0abb-41a5-95eb-533ca424d192\") " pod="openshift-ingress-canary/ingress-canary-9fx8s" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.390714 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxkfm\" (UniqueName: \"kubernetes.io/projected/270b826a-775d-4172-9c7e-5a12320a5046-kube-api-access-mxkfm\") pod \"machine-config-server-5gr2f\" (UID: \"270b826a-775d-4172-9c7e-5a12320a5046\") " pod="openshift-machine-config-operator/machine-config-server-5gr2f" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.399833 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:23 crc kubenswrapper[4814]: E1202 14:31:23.400481 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:23.90045349 +0000 UTC m=+146.024815704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.427270 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5gr2f" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.434886 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrg2g\" (UniqueName: \"kubernetes.io/projected/9410a720-183e-4f95-adb4-3ec7aaeb03e0-kube-api-access-qrg2g\") pod \"dns-default-mx8k5\" (UID: \"9410a720-183e-4f95-adb4-3ec7aaeb03e0\") " pod="openshift-dns/dns-default-mx8k5" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.446758 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9fx8s" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.453908 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vvxm\" (UniqueName: \"kubernetes.io/projected/be15a2d4-290c-45de-a493-4161afdde26f-kube-api-access-5vvxm\") pod \"csi-hostpathplugin-lx28q\" (UID: \"be15a2d4-290c-45de-a493-4161afdde26f\") " pod="hostpath-provisioner/csi-hostpathplugin-lx28q" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.462794 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcx7w\" (UniqueName: \"kubernetes.io/projected/5e3867d8-bafc-4b43-ac79-9cbc271baaa6-kube-api-access-hcx7w\") pod \"olm-operator-6b444d44fb-nbcdh\" (UID: \"5e3867d8-bafc-4b43-ac79-9cbc271baaa6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbcdh" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.462969 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpm2z\" (UniqueName: \"kubernetes.io/projected/25faedf3-ec5c-42cb-804c-15fdfc7ac152-kube-api-access-lpm2z\") pod \"machine-api-operator-5694c8668f-k65hc\" (UID: \"25faedf3-ec5c-42cb-804c-15fdfc7ac152\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k65hc" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.482575 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-lx28q" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.483264 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-mx8k5" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.504743 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: E1202 14:31:23.506303 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:24.006286106 +0000 UTC m=+146.130648310 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.569520 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ch9nz"] Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.606854 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:23 crc kubenswrapper[4814]: E1202 
14:31:23.607253 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:24.107222286 +0000 UTC m=+146.231584490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.717129 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbcdh" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.718655 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-k65hc" Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.719221 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: E1202 14:31:23.719713 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-02 14:31:24.219694212 +0000 UTC m=+146.344056416 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.828140 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:23 crc kubenswrapper[4814]: E1202 14:31:23.828990 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:24.328970815 +0000 UTC m=+146.453333019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:23 crc kubenswrapper[4814]: W1202 14:31:23.896423 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8584026b_8df2_4dca_9a5b_ee9a1fe36808.slice/crio-388d0c399634b13dd95f1c3a7d449d7384f992862c07fad74213367deaabbb6c WatchSource:0}: Error finding container 388d0c399634b13dd95f1c3a7d449d7384f992862c07fad74213367deaabbb6c: Status 404 returned error can't find the container with id 388d0c399634b13dd95f1c3a7d449d7384f992862c07fad74213367deaabbb6c Dec 02 14:31:23 crc kubenswrapper[4814]: I1202 14:31:23.932921 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:23 crc kubenswrapper[4814]: E1202 14:31:23.933502 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:24.433480177 +0000 UTC m=+146.557842381 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.034093 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:24 crc kubenswrapper[4814]: E1202 14:31:24.035942 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:24.535912186 +0000 UTC m=+146.660274390 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.138273 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:24 crc kubenswrapper[4814]: E1202 14:31:24.138904 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:24.638872138 +0000 UTC m=+146.763234352 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.155446 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" podStartSLOduration=123.155421672 podStartE2EDuration="2m3.155421672s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:24.10894031 +0000 UTC m=+146.233302514" watchObservedRunningTime="2025-12-02 14:31:24.155421672 +0000 UTC m=+146.279783876" Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.216356 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hsxbt" podStartSLOduration=123.216332045 podStartE2EDuration="2m3.216332045s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:24.212769164 +0000 UTC m=+146.337131368" watchObservedRunningTime="2025-12-02 14:31:24.216332045 +0000 UTC m=+146.340694249" Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.240814 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:24 crc kubenswrapper[4814]: E1202 14:31:24.241273 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:24.741252565 +0000 UTC m=+146.865614769 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.246615 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bd5zz" event={"ID":"0a325a62-edee-4944-8cd5-5e74d2e8d822","Type":"ContainerStarted","Data":"0655b01dc39a2a018c5caffd6d7b5a195612ee3796a991ce8e6f0d42de816311"} Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.266196 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7nc5" event={"ID":"79a75184-8ba4-400f-9113-d88426227339","Type":"ContainerStarted","Data":"a24e4a29601ed4fa443d33b63d832c9ac913ab04ae3085e2296b0872c7632594"} Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.273284 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ch9nz" 
event={"ID":"8584026b-8df2-4dca-9a5b-ee9a1fe36808","Type":"ContainerStarted","Data":"388d0c399634b13dd95f1c3a7d449d7384f992862c07fad74213367deaabbb6c"} Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.276594 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" event={"ID":"050c15f4-8e90-4961-bb14-eaa0aa868b8c","Type":"ContainerStarted","Data":"89a9784bdfb52b87c600d4183fc9a5398ce0e780343d0b7d9f82fa24ccccd78f"} Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.279211 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tq8bc" event={"ID":"c158e403-ca5b-4c91-99b7-b530be24cd62","Type":"ContainerStarted","Data":"1256d9e3c798ed670711bc818823929177d093410a1dc1248b9f5123cc6823d3"} Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.280901 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qgjwj" event={"ID":"8725a2f3-4453-49c6-a55e-2ec80acf982d","Type":"ContainerStarted","Data":"0576f6908691a2f22bef41fd85196dfddb51b7ab63b95002111ab449a02c8098"} Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.280928 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qgjwj" event={"ID":"8725a2f3-4453-49c6-a55e-2ec80acf982d","Type":"ContainerStarted","Data":"d48b1363ea493f4b6b73f42f23058addd3f6317d0699ba705074dc1dd4026011"} Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.297688 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-2kv5r" event={"ID":"153235f2-4506-45d4-801f-8c96d28df313","Type":"ContainerStarted","Data":"9ec3f2cc83e11b5ba4b50be11d3a10ada12d834867210bd3a217d8760cdd569b"} Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 
14:31:24.328370 4814 generic.go:334] "Generic (PLEG): container finished" podID="1addd1ce-f5a7-4a84-86c9-6530113a1ea2" containerID="1ff512eb858b97b3b1ee204f5baa75d0bfc48dda94671edd648e369b9226145c" exitCode=0 Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.329132 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" event={"ID":"1addd1ce-f5a7-4a84-86c9-6530113a1ea2","Type":"ContainerDied","Data":"1ff512eb858b97b3b1ee204f5baa75d0bfc48dda94671edd648e369b9226145c"} Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.343158 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.343212 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.343299 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.343327 4814 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.343349 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:31:24 crc kubenswrapper[4814]: E1202 14:31:24.346593 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:24.846577727 +0000 UTC m=+146.970939931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.347782 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.354656 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.364417 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.378109 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.427742 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5gr2f" event={"ID":"270b826a-775d-4172-9c7e-5a12320a5046","Type":"ContainerStarted","Data":"caff9ef1fb6092bfea695f90a2a56d91d3c9dd51aa8fc451612963bffa1b4f32"} Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.439581 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" event={"ID":"588890a2-6538-4f1e-b102-24efd0c91bb8","Type":"ContainerStarted","Data":"cc119d04bc603e4e9b8d785450039b2c01d28b5d13aea476145ce8aa07090bc2"} Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.440240 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.444792 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.448584 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:24 crc kubenswrapper[4814]: E1202 14:31:24.449562 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 14:31:24.949538399 +0000 UTC m=+147.073900603 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.468701 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qbj7r" event={"ID":"9f6f57f3-b8c7-4c71-b7bc-627e6ba30686","Type":"ContainerStarted","Data":"cd569819dd9e6a6126edf8d62fb847aa9143879c388a9b1da05760d94f72bd3a"} Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.470612 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.488445 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.504882 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc" event={"ID":"1b5c70a5-db94-4e6e-b18d-a73ab6274154","Type":"ContainerStarted","Data":"4f5c9bd0c8b69d7d15c624ec8b4465a1357f1d5c369b2fd4087f77db5cbf0416"} Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.551836 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:24 crc kubenswrapper[4814]: E1202 14:31:24.552346 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:25.052322717 +0000 UTC m=+147.176684921 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.659970 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:24 crc kubenswrapper[4814]: E1202 14:31:24.661826 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:25.161806646 +0000 UTC m=+147.286168850 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.762791 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:24 crc kubenswrapper[4814]: E1202 14:31:24.763818 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:25.263798463 +0000 UTC m=+147.388160667 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.870320 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:24 crc kubenswrapper[4814]: E1202 14:31:24.870774 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:25.370754047 +0000 UTC m=+147.495116241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.871576 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" Dec 02 14:31:24 crc kubenswrapper[4814]: I1202 14:31:24.973056 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:24 crc kubenswrapper[4814]: E1202 14:31:24.974094 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:25.474074378 +0000 UTC m=+147.598436582 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.026412 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tq8bc" podStartSLOduration=124.026389521 podStartE2EDuration="2m4.026389521s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:25.011099788 +0000 UTC m=+147.135461992" watchObservedRunningTime="2025-12-02 14:31:25.026389521 +0000 UTC m=+147.150751725" Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.065614 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" podStartSLOduration=123.065590147 podStartE2EDuration="2m3.065590147s" podCreationTimestamp="2025-12-02 14:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:25.058603437 +0000 UTC m=+147.182965641" watchObservedRunningTime="2025-12-02 14:31:25.065590147 +0000 UTC m=+147.189952351" Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.074190 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:25 crc kubenswrapper[4814]: E1202 14:31:25.074947 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:25.574918676 +0000 UTC m=+147.699280880 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.089778 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qgjwj" podStartSLOduration=124.089748517 podStartE2EDuration="2m4.089748517s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:25.086604746 +0000 UTC m=+147.210966950" watchObservedRunningTime="2025-12-02 14:31:25.089748517 +0000 UTC m=+147.214110741" Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.153281 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7nc5" podStartSLOduration=124.153254726 podStartE2EDuration="2m4.153254726s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:25.152542078 +0000 UTC m=+147.276904282" watchObservedRunningTime="2025-12-02 14:31:25.153254726 +0000 UTC m=+147.277616930" Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.175772 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:25 crc kubenswrapper[4814]: E1202 14:31:25.176142 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:25.676127993 +0000 UTC m=+147.800490187 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.277565 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:25 crc kubenswrapper[4814]: E1202 14:31:25.279420 4814 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:25.779398083 +0000 UTC m=+147.903760287 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.384658 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:25 crc kubenswrapper[4814]: E1202 14:31:25.385216 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:25.885203908 +0000 UTC m=+148.009566112 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.495928 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:25 crc kubenswrapper[4814]: E1202 14:31:25.496836 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:25.996813242 +0000 UTC m=+148.121175446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.579204 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-2kv5r" event={"ID":"153235f2-4506-45d4-801f-8c96d28df313","Type":"ContainerStarted","Data":"b733d524b45a93eb6307fdfb16e2d3ebb01b1be9a4a767e10ca636fd5be9454c"} Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.598187 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:25 crc kubenswrapper[4814]: E1202 14:31:25.598628 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:26.098612114 +0000 UTC m=+148.222974318 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.604847 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h2zzv"] Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.612859 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-794sp"] Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.628439 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qbj7r" event={"ID":"9f6f57f3-b8c7-4c71-b7bc-627e6ba30686","Type":"ContainerStarted","Data":"64814face72285f9a2848290a5f43368b77899476432d04994e798b88cc27396"} Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.659901 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-fxs2g" event={"ID":"8720fabc-937c-4b06-a3de-bb2719b71d1e","Type":"ContainerStarted","Data":"c09c0abe62313d085b9b36d370c36b7a6c73e467a97aabdb7926b147dfc94cfd"} Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.669619 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-qbj7r" podStartSLOduration=124.669594165 podStartE2EDuration="2m4.669594165s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:25.667736207 +0000 UTC m=+147.792098411" 
watchObservedRunningTime="2025-12-02 14:31:25.669594165 +0000 UTC m=+147.793956379" Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.670976 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-2kv5r" podStartSLOduration=124.67097037 podStartE2EDuration="2m4.67097037s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:25.640022086 +0000 UTC m=+147.764384290" watchObservedRunningTime="2025-12-02 14:31:25.67097037 +0000 UTC m=+147.795332574" Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.682456 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5gr2f" event={"ID":"270b826a-775d-4172-9c7e-5a12320a5046","Type":"ContainerStarted","Data":"6258aa8672deea06e3d92a52c9296f21c8ef36c2896e4e0f37c74706a11259de"} Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.687523 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tn4k9" event={"ID":"c8737e3b-a2b1-4b23-90bc-5dc758823ae9","Type":"ContainerStarted","Data":"e08783b351593c4f9158c023c30861faa229433b5d764cafaedd5ad1a0071351"} Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.689914 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tn4k9" Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.699693 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:25 crc kubenswrapper[4814]: E1202 14:31:25.700481 4814 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:26.200447737 +0000 UTC m=+148.324809941 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.703535 4814 patch_prober.go:28] interesting pod/downloads-7954f5f757-tn4k9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.703584 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tn4k9" podUID="c8737e3b-a2b1-4b23-90bc-5dc758823ae9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.709150 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r9tww"] Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.719542 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-fxs2g" podStartSLOduration=124.719514286 podStartE2EDuration="2m4.719514286s" podCreationTimestamp="2025-12-02 14:29:21 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:25.693800446 +0000 UTC m=+147.818162650" watchObservedRunningTime="2025-12-02 14:31:25.719514286 +0000 UTC m=+147.843876490" Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.720327 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-98tt4"] Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.720381 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-q8tkk" event={"ID":"e5f2f2e4-5c34-410d-8f03-1b34ac392176","Type":"ContainerStarted","Data":"8095393bf9ced194bfe7ff93e6cad2b28f4a4774140a99021dd8a3df02158983"} Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.720585 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-q8tkk" Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.731524 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-tn4k9" podStartSLOduration=124.731505584 podStartE2EDuration="2m4.731505584s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:25.730252291 +0000 UTC m=+147.854614495" watchObservedRunningTime="2025-12-02 14:31:25.731505584 +0000 UTC m=+147.855867788" Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.741490 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7vvdn"] Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.741768 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bd5zz" event={"ID":"0a325a62-edee-4944-8cd5-5e74d2e8d822","Type":"ContainerStarted","Data":"2a4043e4ea38fb3d61ae49d4727117bf0a9d6b00433868485a4a0a4e99b40208"} Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.766892 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-5gr2f" podStartSLOduration=5.766851621 podStartE2EDuration="5.766851621s" podCreationTimestamp="2025-12-02 14:31:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:25.758045665 +0000 UTC m=+147.882407869" watchObservedRunningTime="2025-12-02 14:31:25.766851621 +0000 UTC m=+147.891213815" Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.783742 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xtgt5" event={"ID":"dd8c1623-2ec1-48f8-a92d-a6d160ee78d7","Type":"ContainerStarted","Data":"370778e5450512ae1d58f2f202a74652d5536c8ac3f66900a62b2b4f2b330c01"} Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.783802 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xtgt5" event={"ID":"dd8c1623-2ec1-48f8-a92d-a6d160ee78d7","Type":"ContainerStarted","Data":"ee64421fc82b93137ec6d67449d986e66a8f17e0aa2233365f027b746075cca6"} Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.790851 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmxm4" event={"ID":"0bf522e7-cf36-4600-812d-8f1c82d9ac16","Type":"ContainerStarted","Data":"fc567aa606b0d829308452e244275ee1aef34205df4c874cb9eb625ede51984a"} Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.802841 4814 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:25 crc kubenswrapper[4814]: E1202 14:31:25.809347 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:26.309098255 +0000 UTC m=+148.433460459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.851016 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc" event={"ID":"1b5c70a5-db94-4e6e-b18d-a73ab6274154","Type":"ContainerStarted","Data":"9657ef8b160e791cff6ff5b2087e39289e5b4571a9436a3f5ab194904a6d31cc"} Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.854175 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ch9nz" event={"ID":"8584026b-8df2-4dca-9a5b-ee9a1fe36808","Type":"ContainerStarted","Data":"f486eeb8185cee91f62e25ee62133a3d46e6e2f73e22a8b7164b06b21d0c788f"} Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.868043 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.870280 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bd5zz" podStartSLOduration=124.870222283 podStartE2EDuration="2m4.870222283s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:25.869874874 +0000 UTC m=+147.994237078" watchObservedRunningTime="2025-12-02 14:31:25.870222283 +0000 UTC m=+147.994584487" Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.871097 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-q8tkk" podStartSLOduration=124.871088035 podStartE2EDuration="2m4.871088035s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:25.841938457 +0000 UTC m=+147.966300661" watchObservedRunningTime="2025-12-02 14:31:25.871088035 +0000 UTC m=+147.995450239" Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.875074 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" event={"ID":"1addd1ce-f5a7-4a84-86c9-6530113a1ea2","Type":"ContainerStarted","Data":"c3d2347430f3d6016b808efa3d5403ce5d1c38d776768dd9e08e700daf1a6218"} Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.878687 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" event={"ID":"050c15f4-8e90-4961-bb14-eaa0aa868b8c","Type":"ContainerStarted","Data":"e538bdd7d67e6f1a4d48394bcfad569fa18442b89a21874e0cf1d4ba546f0a36"} Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.880291 4814 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.905877 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:25 crc kubenswrapper[4814]: E1202 14:31:25.906017 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:26.405994861 +0000 UTC m=+148.530357065 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:25 crc kubenswrapper[4814]: I1202 14:31:25.906314 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:25 crc kubenswrapper[4814]: E1202 14:31:25.909994 4814 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:26.409973353 +0000 UTC m=+148.534335557 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.007802 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:26 crc kubenswrapper[4814]: E1202 14:31:26.008380 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:26.508325717 +0000 UTC m=+148.632687921 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.015304 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:26 crc kubenswrapper[4814]: E1202 14:31:26.031931 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:26.531914522 +0000 UTC m=+148.656276726 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.039056 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qbj7r" Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.044507 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xtgt5" podStartSLOduration=125.044481044 podStartE2EDuration="2m5.044481044s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:26.018085677 +0000 UTC m=+148.142447881" watchObservedRunningTime="2025-12-02 14:31:26.044481044 +0000 UTC m=+148.168843248" Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.059474 4814 patch_prober.go:28] interesting pod/router-default-5444994796-qbj7r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 14:31:26 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Dec 02 14:31:26 crc kubenswrapper[4814]: [+]process-running ok Dec 02 14:31:26 crc kubenswrapper[4814]: healthz check failed Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.059572 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qbj7r" podUID="9f6f57f3-b8c7-4c71-b7bc-627e6ba30686" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.061246 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmxm4" podStartSLOduration=125.061216684 podStartE2EDuration="2m5.061216684s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:26.060187127 +0000 UTC m=+148.184549331" watchObservedRunningTime="2025-12-02 14:31:26.061216684 +0000 UTC m=+148.185578888" Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.102120 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc" podStartSLOduration=125.102085242 podStartE2EDuration="2m5.102085242s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:26.092690711 +0000 UTC m=+148.217052935" watchObservedRunningTime="2025-12-02 14:31:26.102085242 +0000 UTC m=+148.226447446" Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.117944 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:26 crc kubenswrapper[4814]: E1202 14:31:26.118276 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 14:31:26.618241067 +0000 UTC m=+148.742603271 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.121275 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.128727 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ch9nz" podStartSLOduration=125.128709556 podStartE2EDuration="2m5.128709556s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:26.127213467 +0000 UTC m=+148.251575691" watchObservedRunningTime="2025-12-02 14:31:26.128709556 +0000 UTC m=+148.253071760" Dec 02 14:31:26 crc kubenswrapper[4814]: E1202 14:31:26.130124 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-02 14:31:26.630083111 +0000 UTC m=+148.754445325 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.196398 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w"] Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.199818 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" podStartSLOduration=125.199788279 podStartE2EDuration="2m5.199788279s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:26.170314903 +0000 UTC m=+148.294677107" watchObservedRunningTime="2025-12-02 14:31:26.199788279 +0000 UTC m=+148.324150483" Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.209255 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b4rng"] Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.222630 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:26 crc kubenswrapper[4814]: E1202 14:31:26.223032 4814 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:26.723009485 +0000 UTC m=+148.847371689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:26 crc kubenswrapper[4814]: W1202 14:31:26.229523 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1134e0d_63cf_4b7f_8f90_1d340c68d818.slice/crio-b0868529c2c880a4498fc42f2ee4e3ac9ca226e42aa5915a1af01525509ba8cf WatchSource:0}: Error finding container b0868529c2c880a4498fc42f2ee4e3ac9ca226e42aa5915a1af01525509ba8cf: Status 404 returned error can't find the container with id b0868529c2c880a4498fc42f2ee4e3ac9ca226e42aa5915a1af01525509ba8cf Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.339601 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:26 crc kubenswrapper[4814]: E1202 14:31:26.340553 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-12-02 14:31:26.840531151 +0000 UTC m=+148.964893355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.396877 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" podStartSLOduration=125.396856336 podStartE2EDuration="2m5.396856336s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:26.222073011 +0000 UTC m=+148.346435215" watchObservedRunningTime="2025-12-02 14:31:26.396856336 +0000 UTC m=+148.521218540" Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.398118 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-r7wht"] Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.441250 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs"] Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.441854 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:26 crc kubenswrapper[4814]: E1202 
14:31:26.442313 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:26.942282842 +0000 UTC m=+149.066645046 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:26 crc kubenswrapper[4814]: W1202 14:31:26.493106 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3f8d529_59a4_4ae7_a24f_f6b9d1881a70.slice/crio-b5344747410be83a6f74c9e4ea91479f630a729a0ff4f5a55c26fcab08c8fdf0 WatchSource:0}: Error finding container b5344747410be83a6f74c9e4ea91479f630a729a0ff4f5a55c26fcab08c8fdf0: Status 404 returned error can't find the container with id b5344747410be83a6f74c9e4ea91479f630a729a0ff4f5a55c26fcab08c8fdf0 Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.533504 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8mxvp"] Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.543948 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:26 
crc kubenswrapper[4814]: E1202 14:31:26.544296 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:27.044282079 +0000 UTC m=+149.168644283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.559829 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.595285 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm"] Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.648362 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:26 crc kubenswrapper[4814]: E1202 14:31:26.648762 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:27.148739909 +0000 UTC m=+149.273102113 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.660460 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbcdh"] Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.671398 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-868nw"] Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.687999 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9fx8s"] Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.697308 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vfvr6"] Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.746324 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-k65hc"] Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.748201 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zqrmh"] Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.750929 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:26 crc kubenswrapper[4814]: E1202 14:31:26.751353 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:27.251336031 +0000 UTC m=+149.375698235 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.782143 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nblt7"] Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.792953 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pfbbr"] Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.852221 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mx8k5"] Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.852818 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:26 crc kubenswrapper[4814]: E1202 14:31:26.853384 4814 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:27.353361819 +0000 UTC m=+149.477724023 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.853426 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fnp2w"] Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.877840 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lx28q"] Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.934843 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9fx8s" event={"ID":"6f82c67c-0abb-41a5-95eb-533ca424d192","Type":"ContainerStarted","Data":"1031e63a6f24070f869f5b47d8efe4cdc4f785406752738e1f58a4685bf37eaf"} Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.954552 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:26 crc kubenswrapper[4814]: E1202 14:31:26.954960 4814 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:27.454946895 +0000 UTC m=+149.579309099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.973402 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-868nw" event={"ID":"38511f71-ece2-4870-a0c7-22aa90dc0eb6","Type":"ContainerStarted","Data":"1f9fca7c90f2a2316077ff0c0154dfff1abab5236ceb19ad83381a2f71c22abd"} Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.981080 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm" event={"ID":"19d86ed9-533c-4a11-a33b-74257f105a6e","Type":"ContainerStarted","Data":"ee9a875d1c7020e1e4ccda87f34eabf10b1c5ebf63e07770484b225c68bb02ad"} Dec 02 14:31:26 crc kubenswrapper[4814]: W1202 14:31:26.991086 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6316d49_679c_4393_bae7_5ca27876cce4.slice/crio-f8db3fe96068ac94336dddc2795788600bc0c47549af935dedc307400e392bec WatchSource:0}: Error finding container f8db3fe96068ac94336dddc2795788600bc0c47549af935dedc307400e392bec: Status 404 returned error can't find the container with id f8db3fe96068ac94336dddc2795788600bc0c47549af935dedc307400e392bec Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.992328 4814 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r9tww" event={"ID":"4ab31756-fb86-44da-b02b-54c45f173962","Type":"ContainerStarted","Data":"2d4ba03906fea13f24f7bdf88e5e2d58d3ffef867a4018d61cb687ca7fb1de4a"} Dec 02 14:31:26 crc kubenswrapper[4814]: I1202 14:31:26.992384 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r9tww" event={"ID":"4ab31756-fb86-44da-b02b-54c45f173962","Type":"ContainerStarted","Data":"5ee27370b2825062b9eacee48585fb9b072a6aa0f7a6ce4098662e13ab1a4fd2"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.001274 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-b4rng" event={"ID":"664ee5fe-5761-497f-80d4-8b9a7059d09a","Type":"ContainerStarted","Data":"6b1b658ca8fcb968df2e31f0e848797a530d88f6c8e57f002da1e42e27030529"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.002759 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-b4rng" Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.038991 4814 patch_prober.go:28] interesting pod/console-operator-58897d9998-b4rng container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.039048 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-b4rng" podUID="664ee5fe-5761-497f-80d4-8b9a7059d09a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.041716 
4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-k65hc" event={"ID":"25faedf3-ec5c-42cb-804c-15fdfc7ac152","Type":"ContainerStarted","Data":"1418e39bd5400a2f558ca3e14b45a9f1795bbcf4efbc797cb28952f1513b681e"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.054439 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-r7wht" event={"ID":"a3f8d529-59a4-4ae7-a24f-f6b9d1881a70","Type":"ContainerStarted","Data":"b5344747410be83a6f74c9e4ea91479f630a729a0ff4f5a55c26fcab08c8fdf0"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.060586 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-794sp" event={"ID":"9fa2a78f-22e9-4a5a-9224-aa9db469d17a","Type":"ContainerStarted","Data":"ada0d7617efd00cdedf1dd8054cb7203e6c32da8e2dedf2ac7a94b118b302261"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.060628 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-794sp" event={"ID":"9fa2a78f-22e9-4a5a-9224-aa9db469d17a","Type":"ContainerStarted","Data":"4ac1804bb587c342c49c16406a4a91d7ff8318e49600b4e9330168dd48289d93"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.062540 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:27 crc kubenswrapper[4814]: E1202 14:31:27.062703 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 14:31:27.56268719 +0000 UTC m=+149.687049394 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.062879 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.063065 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5b95d0a3a425b92d4786ea9ec4d20ae6d60fad5f5a30bc0d84dc0f5a143ca79b"} Dec 02 14:31:27 crc kubenswrapper[4814]: E1202 14:31:27.063223 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:27.563215724 +0000 UTC m=+149.687577918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.067550 4814 patch_prober.go:28] interesting pod/router-default-5444994796-qbj7r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 14:31:27 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Dec 02 14:31:27 crc kubenswrapper[4814]: [+]process-running ok Dec 02 14:31:27 crc kubenswrapper[4814]: healthz check failed Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.076508 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qbj7r" podUID="9f6f57f3-b8c7-4c71-b7bc-627e6ba30686" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.079325 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr" event={"ID":"d71727d2-331b-41f3-889e-71500d2003ae","Type":"ContainerStarted","Data":"cc41fd04165d50b94631dc655c2dfdaa906db8a2929ed976ba8731aef4689d28"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.090724 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r9tww" podStartSLOduration=126.090694869 podStartE2EDuration="2m6.090694869s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:27.036334744 +0000 UTC m=+149.160696948" watchObservedRunningTime="2025-12-02 14:31:27.090694869 +0000 UTC m=+149.215057073" Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.091177 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-b4rng" podStartSLOduration=126.091171481 podStartE2EDuration="2m6.091171481s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:27.089328934 +0000 UTC m=+149.213691138" watchObservedRunningTime="2025-12-02 14:31:27.091171481 +0000 UTC m=+149.215533685" Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.109596 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs" event={"ID":"a19e92eb-a898-4589-9cff-652e92e2520e","Type":"ContainerStarted","Data":"f9f7dcf9045bd04a56093dac9f04dbb44515a0658887dd8a04711e8b33460b40"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.118553 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.119013 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.158775 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" event={"ID":"1addd1ce-f5a7-4a84-86c9-6530113a1ea2","Type":"ContainerStarted","Data":"bd77f02416e085b53c222a46ade1dfa5cfcb6dc55114453e90958bee2c6a2c29"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.165894 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:27 crc kubenswrapper[4814]: E1202 14:31:27.166456 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:27.666424462 +0000 UTC m=+149.790786676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.172527 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vfvr6" event={"ID":"f4fcbcc9-c1bf-4735-ad50-e999ba4c9159","Type":"ContainerStarted","Data":"ca2b107e720a86720fac25e099656f94dafa4e6e5e881efd6707df156223c637"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.188011 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbcdh" event={"ID":"5e3867d8-bafc-4b43-ac79-9cbc271baaa6","Type":"ContainerStarted","Data":"7a6901951b888817f9e1ec48025106f050b9cbf3f86ee7262c8196de1656eeaa"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.204856 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-794sp" podStartSLOduration=126.204829767 
podStartE2EDuration="2m6.204829767s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:27.195504188 +0000 UTC m=+149.319866392" watchObservedRunningTime="2025-12-02 14:31:27.204829767 +0000 UTC m=+149.329191971" Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.212027 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" event={"ID":"d1134e0d-63cf-4b7f-8f90-1d340c68d818","Type":"ContainerStarted","Data":"b0868529c2c880a4498fc42f2ee4e3ac9ca226e42aa5915a1af01525509ba8cf"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.263894 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-98tt4" event={"ID":"584ddac6-ca1c-4d00-ae3d-6b6e9b0a32df","Type":"ContainerStarted","Data":"02f7aa23c54989ddcadcd5c888de20a2704f768f6af7f4d9c0f386ca12460bf2"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.263945 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-98tt4" event={"ID":"584ddac6-ca1c-4d00-ae3d-6b6e9b0a32df","Type":"ContainerStarted","Data":"9bf12fc657834a9e97d36ef2c0f9f44941efe546a108abcc8625336ab3f61b6a"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.272940 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:27 crc kubenswrapper[4814]: E1202 14:31:27.273980 4814 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:27.773963861 +0000 UTC m=+149.898326065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.285702 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zqrmh" event={"ID":"4811cc2a-adc3-4fb7-8f09-d9426a60fffc","Type":"ContainerStarted","Data":"7d1f3b0e77462e254026679dc61e5bf54551c99b03483fefe0f865b9b1ceacc1"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.292831 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-98tt4" podStartSLOduration=126.292796435 podStartE2EDuration="2m6.292796435s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:27.290570967 +0000 UTC m=+149.414933171" watchObservedRunningTime="2025-12-02 14:31:27.292796435 +0000 UTC m=+149.417158639" Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.305628 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h2zzv" 
event={"ID":"bcdcf420-00a9-4e74-a62a-20422778b5e0","Type":"ContainerStarted","Data":"26bd7065151698cb3ac0775c2c4a0613873ea56d4bf0926cb816145ec7ecae7b"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.305713 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h2zzv" event={"ID":"bcdcf420-00a9-4e74-a62a-20422778b5e0","Type":"ContainerStarted","Data":"2c7eec9b5c29e7effe5e41c025eb1489566b77f49b91b958fab72e195704e634"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.351980 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jxnsc" event={"ID":"1b5c70a5-db94-4e6e-b18d-a73ab6274154","Type":"ContainerStarted","Data":"2adb7234f125f56082d7317e4782c8106f0b7e95a6a523a22a957f2b5b0f566c"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.369574 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8mxvp" event={"ID":"a784542d-74ab-4b13-90cb-9159a9dbc173","Type":"ContainerStarted","Data":"97efd53d9be9cabc6d90acaadabc96186328825141e54b14a2741e91763c61ba"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.373982 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:27 crc kubenswrapper[4814]: E1202 14:31:27.374289 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:27.874245434 +0000 UTC m=+149.998607638 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.374572 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:27 crc kubenswrapper[4814]: E1202 14:31:27.377285 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:27.877270592 +0000 UTC m=+150.001632796 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.387049 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xtgt5" event={"ID":"dd8c1623-2ec1-48f8-a92d-a6d160ee78d7","Type":"ContainerStarted","Data":"bac6fad60673f23dfd62da2854598601430ae7ed009d3b9d296febf281d0a631"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.415564 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7vvdn" event={"ID":"fe138880-67a4-4c5a-ba3c-98af7246dcb5","Type":"ContainerStarted","Data":"6328e09d8e3e451f9e8bcbb78181891fa1be087d1537f3170176449ad1ed0790"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.415625 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7vvdn" event={"ID":"fe138880-67a4-4c5a-ba3c-98af7246dcb5","Type":"ContainerStarted","Data":"debec3e9395900d576047a58008c8489f9334a7e32b69f0ecd44244a018b9f38"} Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.416701 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7vvdn" Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.417059 4814 patch_prober.go:28] interesting pod/downloads-7954f5f757-tn4k9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: 
connect: connection refused" start-of-body= Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.417093 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tn4k9" podUID="c8737e3b-a2b1-4b23-90bc-5dc758823ae9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.448685 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7vvdn" Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.450379 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7vvdn" podStartSLOduration=125.450363328 podStartE2EDuration="2m5.450363328s" podCreationTimestamp="2025-12-02 14:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:27.448475369 +0000 UTC m=+149.572837573" watchObservedRunningTime="2025-12-02 14:31:27.450363328 +0000 UTC m=+149.574725522" Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.450582 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h2zzv" podStartSLOduration=126.450576423 podStartE2EDuration="2m6.450576423s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:27.360775449 +0000 UTC m=+149.485137663" watchObservedRunningTime="2025-12-02 14:31:27.450576423 +0000 UTC m=+149.574938627" Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.460179 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-q8tkk" Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.477363 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:27 crc kubenswrapper[4814]: E1202 14:31:27.478961 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:27.978944131 +0000 UTC m=+150.103306345 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.580666 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:27 crc kubenswrapper[4814]: E1202 14:31:27.586456 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-12-02 14:31:28.086434119 +0000 UTC m=+150.210796323 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.682520 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:27 crc kubenswrapper[4814]: E1202 14:31:27.683442 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:28.183421828 +0000 UTC m=+150.307784032 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.785214 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:27 crc kubenswrapper[4814]: E1202 14:31:27.785596 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:28.285583749 +0000 UTC m=+150.409945953 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.886872 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:27 crc kubenswrapper[4814]: E1202 14:31:27.887443 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:28.387420542 +0000 UTC m=+150.511782756 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.989094 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:27 crc kubenswrapper[4814]: E1202 14:31:27.989478 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:28.489465171 +0000 UTC m=+150.613827375 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.994411 4814 patch_prober.go:28] interesting pod/apiserver-76f77b778f-5vc2w container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 02 14:31:27 crc kubenswrapper[4814]: [+]log ok Dec 02 14:31:27 crc kubenswrapper[4814]: [+]etcd ok Dec 02 14:31:27 crc kubenswrapper[4814]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 02 14:31:27 crc kubenswrapper[4814]: [+]poststarthook/generic-apiserver-start-informers ok Dec 02 14:31:27 crc kubenswrapper[4814]: [+]poststarthook/max-in-flight-filter ok Dec 02 14:31:27 crc kubenswrapper[4814]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 02 14:31:27 crc kubenswrapper[4814]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 02 14:31:27 crc kubenswrapper[4814]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 02 14:31:27 crc kubenswrapper[4814]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 02 14:31:27 crc kubenswrapper[4814]: [+]poststarthook/project.openshift.io-projectcache ok Dec 02 14:31:27 crc kubenswrapper[4814]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 02 14:31:27 crc kubenswrapper[4814]: [+]poststarthook/openshift.io-startinformers ok Dec 02 14:31:27 crc kubenswrapper[4814]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 02 14:31:27 crc 
kubenswrapper[4814]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 02 14:31:27 crc kubenswrapper[4814]: livez check failed Dec 02 14:31:27 crc kubenswrapper[4814]: I1202 14:31:27.995099 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" podUID="1addd1ce-f5a7-4a84-86c9-6530113a1ea2" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.044883 4814 patch_prober.go:28] interesting pod/router-default-5444994796-qbj7r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 14:31:28 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Dec 02 14:31:28 crc kubenswrapper[4814]: [+]process-running ok Dec 02 14:31:28 crc kubenswrapper[4814]: healthz check failed Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.044975 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qbj7r" podUID="9f6f57f3-b8c7-4c71-b7bc-627e6ba30686" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.090672 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:28 crc kubenswrapper[4814]: E1202 14:31:28.091088 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 14:31:28.591065668 +0000 UTC m=+150.715427872 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.192350 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:28 crc kubenswrapper[4814]: E1202 14:31:28.192927 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:28.692914211 +0000 UTC m=+150.817276415 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.295755 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:28 crc kubenswrapper[4814]: E1202 14:31:28.296959 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:28.79694122 +0000 UTC m=+150.921303424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.401645 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:28 crc kubenswrapper[4814]: E1202 14:31:28.401975 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:28.901961875 +0000 UTC m=+151.026324079 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.497448 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr" event={"ID":"d71727d2-331b-41f3-889e-71500d2003ae","Type":"ContainerStarted","Data":"6a4cf42fcf37413618a42bb2599e35b970728850b781555d11dcb55965d25162"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.504896 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:28 crc kubenswrapper[4814]: E1202 14:31:28.505411 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:29.005391249 +0000 UTC m=+151.129753453 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.519387 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mx8k5" event={"ID":"9410a720-183e-4f95-adb4-3ec7aaeb03e0","Type":"ContainerStarted","Data":"48ad86869c9b4e4f554ea235659cc79ec8b6419447082820bc19b5a49ceb5cdc"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.552519 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9fx8s" event={"ID":"6f82c67c-0abb-41a5-95eb-533ca424d192","Type":"ContainerStarted","Data":"99f1d3e1f2ee5c601826bf6dc85838168787d49e57d8a9032acb8a3336fbbdd7"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.572775 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lx28q" event={"ID":"be15a2d4-290c-45de-a493-4161afdde26f","Type":"ContainerStarted","Data":"a07233e7a09a3318223383b5a6a1810f03bdec70c9fc379ac6817de474518153"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.592223 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nblt7" event={"ID":"2816ec52-510f-49c7-a70e-4e7db428d500","Type":"ContainerStarted","Data":"b87946e89a0fb4923a594cb40eb7779e008c2906c6e861afbe55bccf97671400"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.592300 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nblt7" 
event={"ID":"2816ec52-510f-49c7-a70e-4e7db428d500","Type":"ContainerStarted","Data":"d986ebd990906373055521d5781dcaf7b0813f7dcc9232823c5fef216a478e32"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.608546 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:28 crc kubenswrapper[4814]: E1202 14:31:28.610138 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:29.110118236 +0000 UTC m=+151.234480440 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.615878 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" event={"ID":"a6316d49-679c-4393-bae7-5ca27876cce4","Type":"ContainerStarted","Data":"dc0dc7b0fe7d03a9f2adadd72bf8952d65e89a36a3d3c8946a81b134899f97ae"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.615933 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" 
event={"ID":"a6316d49-679c-4393-bae7-5ca27876cce4","Type":"ContainerStarted","Data":"f8db3fe96068ac94336dddc2795788600bc0c47549af935dedc307400e392bec"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.616974 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.622096 4814 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fnp2w container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.622156 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" podUID="a6316d49-679c-4393-bae7-5ca27876cce4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.636801 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-b4rng" event={"ID":"664ee5fe-5761-497f-80d4-8b9a7059d09a","Type":"ContainerStarted","Data":"7e3c73aa9656c751af73a528b55fdbb05231846293370439e99b5ad54f5b1ecc"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.661625 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-k65hc" event={"ID":"25faedf3-ec5c-42cb-804c-15fdfc7ac152","Type":"ContainerStarted","Data":"ef16fbea930b03ec5b349eca5fe9b36174d5b47f50d1d17c254e2cf4fac17c14"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.680979 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vfvr6" 
event={"ID":"f4fcbcc9-c1bf-4735-ad50-e999ba4c9159","Type":"ContainerStarted","Data":"1161d9a46328a31c2d0b2daa92625737adcd69ec95894b801b91d14aae2cfcb4"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.689699 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-868nw" event={"ID":"38511f71-ece2-4870-a0c7-22aa90dc0eb6","Type":"ContainerStarted","Data":"4c0f565b4dcc24010bd493367142e5261cbc89d5991745e29e16cdede349196a"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.712007 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:28 crc kubenswrapper[4814]: E1202 14:31:28.712442 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:29.212419061 +0000 UTC m=+151.336781265 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.712602 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:28 crc kubenswrapper[4814]: E1202 14:31:28.712943 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:29.212936614 +0000 UTC m=+151.337298808 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.727919 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbcdh" event={"ID":"5e3867d8-bafc-4b43-ac79-9cbc271baaa6","Type":"ContainerStarted","Data":"9f2630df7aa2059627ca2d566967111d959f34cc30779876893c7c676408dd76"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.729695 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbcdh" Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.733096 4814 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-nbcdh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.733364 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbcdh" podUID="5e3867d8-bafc-4b43-ac79-9cbc271baaa6" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.755698 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-r7wht" 
event={"ID":"a3f8d529-59a4-4ae7-a24f-f6b9d1881a70","Type":"ContainerStarted","Data":"c44c2d3889206986ef9e2d60eb1a805a17cf4c965482273b9d71a1cdd7f1fc38"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.755779 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-r7wht" event={"ID":"a3f8d529-59a4-4ae7-a24f-f6b9d1881a70","Type":"ContainerStarted","Data":"4bd2582b9a4faea1b978d2e1b252e077e4bbc356b44c7bd59f049af62b4a3d9a"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.772570 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs" event={"ID":"a19e92eb-a898-4589-9cff-652e92e2520e","Type":"ContainerStarted","Data":"94319f7070aaee091fd34645866cce7c349fd2018448386b6b389bacc3d8154e"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.772881 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-b4rng" Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.773448 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs" Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.789971 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8mxvp" event={"ID":"a784542d-74ab-4b13-90cb-9159a9dbc173","Type":"ContainerStarted","Data":"6eb4c697adedddf08fcbdee24ef341d4a630d9f75d220e41e1616436d6168152"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.796865 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs" Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.797178 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm" 
event={"ID":"19d86ed9-533c-4a11-a33b-74257f105a6e","Type":"ContainerStarted","Data":"375b63938bea4762a8531d7c2edf4600cbc234849781e72035dff0382cab8058"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.818584 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7xfzc"] Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.818983 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.819608 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7xfzc" Dec 02 14:31:28 crc kubenswrapper[4814]: E1202 14:31:28.821413 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:29.321381927 +0000 UTC m=+151.445744131 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.822552 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2c9b89c962014583bfa1074d37c02bf8b015e27dd8ba45a37eeae8d3fe82a371"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.822598 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d8521b35216b8aa05737b2ea557650b41d944c3b4887279fd84096ceb977c869"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.823691 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.832726 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5913c4baa72052c47c6be1d09a8f64203d9bb391f330d810bcd5daea9983b485"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.833273 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"17ecde8621d3779d862acae8d4ac63b72f7204ab29502044c186162caed181cc"} Dec 
02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.833941 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.849398 4814 generic.go:334] "Generic (PLEG): container finished" podID="d1134e0d-63cf-4b7f-8f90-1d340c68d818" containerID="b162357e9ef827cd01b49985f69fae72ce2596a6dfd25439f7d4c78339ee28eb" exitCode=0 Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.849761 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" event={"ID":"d1134e0d-63cf-4b7f-8f90-1d340c68d818","Type":"ContainerDied","Data":"b162357e9ef827cd01b49985f69fae72ce2596a6dfd25439f7d4c78339ee28eb"} Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.853130 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7xfzc"] Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.926613 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.926836 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f714bdcc-9476-4ed4-a164-5e6e5657e0a3-utilities\") pod \"certified-operators-7xfzc\" (UID: \"f714bdcc-9476-4ed4-a164-5e6e5657e0a3\") " pod="openshift-marketplace/certified-operators-7xfzc" Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.927012 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-54zqs" podStartSLOduration=126.926978497 podStartE2EDuration="2m6.926978497s" podCreationTimestamp="2025-12-02 14:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:28.925439497 +0000 UTC m=+151.049801701" watchObservedRunningTime="2025-12-02 14:31:28.926978497 +0000 UTC m=+151.051340701" Dec 02 14:31:28 crc kubenswrapper[4814]: E1202 14:31:28.929677 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:29.429657225 +0000 UTC m=+151.554019429 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.927052 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm7qf\" (UniqueName: \"kubernetes.io/projected/f714bdcc-9476-4ed4-a164-5e6e5657e0a3-kube-api-access-fm7qf\") pod \"certified-operators-7xfzc\" (UID: \"f714bdcc-9476-4ed4-a164-5e6e5657e0a3\") " pod="openshift-marketplace/certified-operators-7xfzc" Dec 02 14:31:28 crc kubenswrapper[4814]: I1202 14:31:28.933410 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f714bdcc-9476-4ed4-a164-5e6e5657e0a3-catalog-content\") pod 
\"certified-operators-7xfzc\" (UID: \"f714bdcc-9476-4ed4-a164-5e6e5657e0a3\") " pod="openshift-marketplace/certified-operators-7xfzc" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.036802 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zqrmh" podStartSLOduration=128.036770124 podStartE2EDuration="2m8.036770124s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:28.975780999 +0000 UTC m=+151.100143223" watchObservedRunningTime="2025-12-02 14:31:29.036770124 +0000 UTC m=+151.161132338" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.040397 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.042173 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f714bdcc-9476-4ed4-a164-5e6e5657e0a3-utilities\") pod \"certified-operators-7xfzc\" (UID: \"f714bdcc-9476-4ed4-a164-5e6e5657e0a3\") " pod="openshift-marketplace/certified-operators-7xfzc" Dec 02 14:31:29 crc kubenswrapper[4814]: E1202 14:31:29.042288 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:29.542271525 +0000 UTC m=+151.666633729 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.040918 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f714bdcc-9476-4ed4-a164-5e6e5657e0a3-utilities\") pod \"certified-operators-7xfzc\" (UID: \"f714bdcc-9476-4ed4-a164-5e6e5657e0a3\") " pod="openshift-marketplace/certified-operators-7xfzc" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.042344 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm7qf\" (UniqueName: \"kubernetes.io/projected/f714bdcc-9476-4ed4-a164-5e6e5657e0a3-kube-api-access-fm7qf\") pod \"certified-operators-7xfzc\" (UID: \"f714bdcc-9476-4ed4-a164-5e6e5657e0a3\") " pod="openshift-marketplace/certified-operators-7xfzc" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.042371 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f714bdcc-9476-4ed4-a164-5e6e5657e0a3-catalog-content\") pod \"certified-operators-7xfzc\" (UID: \"f714bdcc-9476-4ed4-a164-5e6e5657e0a3\") " pod="openshift-marketplace/certified-operators-7xfzc" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.042638 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f714bdcc-9476-4ed4-a164-5e6e5657e0a3-catalog-content\") pod \"certified-operators-7xfzc\" (UID: \"f714bdcc-9476-4ed4-a164-5e6e5657e0a3\") " 
pod="openshift-marketplace/certified-operators-7xfzc" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.080847 4814 patch_prober.go:28] interesting pod/router-default-5444994796-qbj7r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 14:31:29 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Dec 02 14:31:29 crc kubenswrapper[4814]: [+]process-running ok Dec 02 14:31:29 crc kubenswrapper[4814]: healthz check failed Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.081303 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qbj7r" podUID="9f6f57f3-b8c7-4c71-b7bc-627e6ba30686" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.106387 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sxpmb"] Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.109802 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm7qf\" (UniqueName: \"kubernetes.io/projected/f714bdcc-9476-4ed4-a164-5e6e5657e0a3-kube-api-access-fm7qf\") pod \"certified-operators-7xfzc\" (UID: \"f714bdcc-9476-4ed4-a164-5e6e5657e0a3\") " pod="openshift-marketplace/certified-operators-7xfzc" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.110937 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm" podStartSLOduration=89.110912856 podStartE2EDuration="1m29.110912856s" podCreationTimestamp="2025-12-02 14:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:29.084978691 +0000 UTC m=+151.209340895" 
watchObservedRunningTime="2025-12-02 14:31:29.110912856 +0000 UTC m=+151.235275060" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.125690 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sxpmb"] Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.125825 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sxpmb" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.130358 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.143685 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:29 crc kubenswrapper[4814]: E1202 14:31:29.144421 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:29.644344424 +0000 UTC m=+151.768706628 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.159807 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-9fx8s" podStartSLOduration=9.1597806 podStartE2EDuration="9.1597806s" podCreationTimestamp="2025-12-02 14:31:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:29.158906258 +0000 UTC m=+151.283268462" watchObservedRunningTime="2025-12-02 14:31:29.1597806 +0000 UTC m=+151.284142804" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.160136 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7xfzc" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.225911 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" podStartSLOduration=127.225891616 podStartE2EDuration="2m7.225891616s" podCreationTimestamp="2025-12-02 14:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:29.205617066 +0000 UTC m=+151.329979270" watchObservedRunningTime="2025-12-02 14:31:29.225891616 +0000 UTC m=+151.350253810" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.227167 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lvn9c"] Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.251043 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:29 crc kubenswrapper[4814]: E1202 14:31:29.251438 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:29.751419751 +0000 UTC m=+151.875781955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.251637 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lvn9c" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.264736 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lvn9c"] Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.308966 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbcdh" podStartSLOduration=127.308938247 podStartE2EDuration="2m7.308938247s" podCreationTimestamp="2025-12-02 14:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:29.282357475 +0000 UTC m=+151.406719679" watchObservedRunningTime="2025-12-02 14:31:29.308938247 +0000 UTC m=+151.433300471" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.356178 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-r7wht" podStartSLOduration=128.356146589 podStartE2EDuration="2m8.356146589s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:29.353884841 +0000 UTC m=+151.478247065" watchObservedRunningTime="2025-12-02 14:31:29.356146589 +0000 UTC 
m=+151.480508793" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.358633 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/638fc62a-4504-4762-bbaa-e7a6a6dfbca0-utilities\") pod \"community-operators-sxpmb\" (UID: \"638fc62a-4504-4762-bbaa-e7a6a6dfbca0\") " pod="openshift-marketplace/community-operators-sxpmb" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.358687 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpsmf\" (UniqueName: \"kubernetes.io/projected/638fc62a-4504-4762-bbaa-e7a6a6dfbca0-kube-api-access-bpsmf\") pod \"community-operators-sxpmb\" (UID: \"638fc62a-4504-4762-bbaa-e7a6a6dfbca0\") " pod="openshift-marketplace/community-operators-sxpmb" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.358706 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f878b86f-db68-4fe8-ae1d-74a9edd15eff-utilities\") pod \"certified-operators-lvn9c\" (UID: \"f878b86f-db68-4fe8-ae1d-74a9edd15eff\") " pod="openshift-marketplace/certified-operators-lvn9c" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.358732 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.358768 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f878b86f-db68-4fe8-ae1d-74a9edd15eff-catalog-content\") pod 
\"certified-operators-lvn9c\" (UID: \"f878b86f-db68-4fe8-ae1d-74a9edd15eff\") " pod="openshift-marketplace/certified-operators-lvn9c" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.358792 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/638fc62a-4504-4762-bbaa-e7a6a6dfbca0-catalog-content\") pod \"community-operators-sxpmb\" (UID: \"638fc62a-4504-4762-bbaa-e7a6a6dfbca0\") " pod="openshift-marketplace/community-operators-sxpmb" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.358812 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zb9l\" (UniqueName: \"kubernetes.io/projected/f878b86f-db68-4fe8-ae1d-74a9edd15eff-kube-api-access-5zb9l\") pod \"certified-operators-lvn9c\" (UID: \"f878b86f-db68-4fe8-ae1d-74a9edd15eff\") " pod="openshift-marketplace/certified-operators-lvn9c" Dec 02 14:31:29 crc kubenswrapper[4814]: E1202 14:31:29.359216 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:29.859181737 +0000 UTC m=+151.983543941 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.462807 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.463555 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpsmf\" (UniqueName: \"kubernetes.io/projected/638fc62a-4504-4762-bbaa-e7a6a6dfbca0-kube-api-access-bpsmf\") pod \"community-operators-sxpmb\" (UID: \"638fc62a-4504-4762-bbaa-e7a6a6dfbca0\") " pod="openshift-marketplace/community-operators-sxpmb" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.463584 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f878b86f-db68-4fe8-ae1d-74a9edd15eff-utilities\") pod \"certified-operators-lvn9c\" (UID: \"f878b86f-db68-4fe8-ae1d-74a9edd15eff\") " pod="openshift-marketplace/certified-operators-lvn9c" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.463640 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f878b86f-db68-4fe8-ae1d-74a9edd15eff-catalog-content\") pod \"certified-operators-lvn9c\" (UID: \"f878b86f-db68-4fe8-ae1d-74a9edd15eff\") " 
pod="openshift-marketplace/certified-operators-lvn9c" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.463665 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/638fc62a-4504-4762-bbaa-e7a6a6dfbca0-catalog-content\") pod \"community-operators-sxpmb\" (UID: \"638fc62a-4504-4762-bbaa-e7a6a6dfbca0\") " pod="openshift-marketplace/community-operators-sxpmb" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.463689 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zb9l\" (UniqueName: \"kubernetes.io/projected/f878b86f-db68-4fe8-ae1d-74a9edd15eff-kube-api-access-5zb9l\") pod \"certified-operators-lvn9c\" (UID: \"f878b86f-db68-4fe8-ae1d-74a9edd15eff\") " pod="openshift-marketplace/certified-operators-lvn9c" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.463723 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/638fc62a-4504-4762-bbaa-e7a6a6dfbca0-utilities\") pod \"community-operators-sxpmb\" (UID: \"638fc62a-4504-4762-bbaa-e7a6a6dfbca0\") " pod="openshift-marketplace/community-operators-sxpmb" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.464130 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/638fc62a-4504-4762-bbaa-e7a6a6dfbca0-utilities\") pod \"community-operators-sxpmb\" (UID: \"638fc62a-4504-4762-bbaa-e7a6a6dfbca0\") " pod="openshift-marketplace/community-operators-sxpmb" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.464464 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f878b86f-db68-4fe8-ae1d-74a9edd15eff-catalog-content\") pod \"certified-operators-lvn9c\" (UID: \"f878b86f-db68-4fe8-ae1d-74a9edd15eff\") " 
pod="openshift-marketplace/certified-operators-lvn9c" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.464666 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/638fc62a-4504-4762-bbaa-e7a6a6dfbca0-catalog-content\") pod \"community-operators-sxpmb\" (UID: \"638fc62a-4504-4762-bbaa-e7a6a6dfbca0\") " pod="openshift-marketplace/community-operators-sxpmb" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.465943 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f878b86f-db68-4fe8-ae1d-74a9edd15eff-utilities\") pod \"certified-operators-lvn9c\" (UID: \"f878b86f-db68-4fe8-ae1d-74a9edd15eff\") " pod="openshift-marketplace/certified-operators-lvn9c" Dec 02 14:31:29 crc kubenswrapper[4814]: E1202 14:31:29.466068 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:29.966040898 +0000 UTC m=+152.090403102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.468508 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5pw69"] Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.477189 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5pw69" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.482486 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-pfbbr" podStartSLOduration=128.48247003 podStartE2EDuration="2m8.48247003s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:29.477209375 +0000 UTC m=+151.601571579" watchObservedRunningTime="2025-12-02 14:31:29.48247003 +0000 UTC m=+151.606832234" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.502009 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5pw69"] Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.523727 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zb9l\" (UniqueName: \"kubernetes.io/projected/f878b86f-db68-4fe8-ae1d-74a9edd15eff-kube-api-access-5zb9l\") pod \"certified-operators-lvn9c\" (UID: \"f878b86f-db68-4fe8-ae1d-74a9edd15eff\") " pod="openshift-marketplace/certified-operators-lvn9c" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.545751 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpsmf\" (UniqueName: \"kubernetes.io/projected/638fc62a-4504-4762-bbaa-e7a6a6dfbca0-kube-api-access-bpsmf\") pod \"community-operators-sxpmb\" (UID: \"638fc62a-4504-4762-bbaa-e7a6a6dfbca0\") " pod="openshift-marketplace/community-operators-sxpmb" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.565270 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: 
\"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:29 crc kubenswrapper[4814]: E1202 14:31:29.565598 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:30.065586003 +0000 UTC m=+152.189948207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.600309 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-vfvr6" podStartSLOduration=127.600286993 podStartE2EDuration="2m7.600286993s" podCreationTimestamp="2025-12-02 14:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:29.535193033 +0000 UTC m=+151.659555237" watchObservedRunningTime="2025-12-02 14:31:29.600286993 +0000 UTC m=+151.724649197" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.607551 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lvn9c" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.674804 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.675040 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5879c1b7-4967-4059-9443-7821ccc92d59-utilities\") pod \"community-operators-5pw69\" (UID: \"5879c1b7-4967-4059-9443-7821ccc92d59\") " pod="openshift-marketplace/community-operators-5pw69" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.675064 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5879c1b7-4967-4059-9443-7821ccc92d59-catalog-content\") pod \"community-operators-5pw69\" (UID: \"5879c1b7-4967-4059-9443-7821ccc92d59\") " pod="openshift-marketplace/community-operators-5pw69" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.675111 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4qxg\" (UniqueName: \"kubernetes.io/projected/5879c1b7-4967-4059-9443-7821ccc92d59-kube-api-access-m4qxg\") pod \"community-operators-5pw69\" (UID: \"5879c1b7-4967-4059-9443-7821ccc92d59\") " pod="openshift-marketplace/community-operators-5pw69" Dec 02 14:31:29 crc kubenswrapper[4814]: E1202 14:31:29.675219 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-12-02 14:31:30.175203815 +0000 UTC m=+152.299566019 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.725454 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-868nw" podStartSLOduration=127.725415414 podStartE2EDuration="2m7.725415414s" podCreationTimestamp="2025-12-02 14:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:29.712679437 +0000 UTC m=+151.837041641" watchObservedRunningTime="2025-12-02 14:31:29.725415414 +0000 UTC m=+151.849777608" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.776737 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sxpmb" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.778430 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5879c1b7-4967-4059-9443-7821ccc92d59-utilities\") pod \"community-operators-5pw69\" (UID: \"5879c1b7-4967-4059-9443-7821ccc92d59\") " pod="openshift-marketplace/community-operators-5pw69" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.778459 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5879c1b7-4967-4059-9443-7821ccc92d59-catalog-content\") pod \"community-operators-5pw69\" (UID: \"5879c1b7-4967-4059-9443-7821ccc92d59\") " pod="openshift-marketplace/community-operators-5pw69" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.778502 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.778538 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4qxg\" (UniqueName: \"kubernetes.io/projected/5879c1b7-4967-4059-9443-7821ccc92d59-kube-api-access-m4qxg\") pod \"community-operators-5pw69\" (UID: \"5879c1b7-4967-4059-9443-7821ccc92d59\") " pod="openshift-marketplace/community-operators-5pw69" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.779913 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5879c1b7-4967-4059-9443-7821ccc92d59-utilities\") pod \"community-operators-5pw69\" 
(UID: \"5879c1b7-4967-4059-9443-7821ccc92d59\") " pod="openshift-marketplace/community-operators-5pw69" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.780863 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5879c1b7-4967-4059-9443-7821ccc92d59-catalog-content\") pod \"community-operators-5pw69\" (UID: \"5879c1b7-4967-4059-9443-7821ccc92d59\") " pod="openshift-marketplace/community-operators-5pw69" Dec 02 14:31:29 crc kubenswrapper[4814]: E1202 14:31:29.786601 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:30.286585963 +0000 UTC m=+152.410948167 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.814672 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4qxg\" (UniqueName: \"kubernetes.io/projected/5879c1b7-4967-4059-9443-7821ccc92d59-kube-api-access-m4qxg\") pod \"community-operators-5pw69\" (UID: \"5879c1b7-4967-4059-9443-7821ccc92d59\") " pod="openshift-marketplace/community-operators-5pw69" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.827051 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8mxvp" podStartSLOduration=128.827021531 podStartE2EDuration="2m8.827021531s" 
podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:29.776625488 +0000 UTC m=+151.900987692" watchObservedRunningTime="2025-12-02 14:31:29.827021531 +0000 UTC m=+151.951383735" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.837711 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-k65hc" podStartSLOduration=128.837686035 podStartE2EDuration="2m8.837686035s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:29.812605811 +0000 UTC m=+151.936968015" watchObservedRunningTime="2025-12-02 14:31:29.837686035 +0000 UTC m=+151.962048229" Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.880713 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.890791 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"569f9ac7bbf31b31b7b5b1dc519f841491d3d7e58471b227d891ae345a49073d"} Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.896401 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zqrmh" 
event={"ID":"4811cc2a-adc3-4fb7-8f09-d9426a60fffc","Type":"ContainerStarted","Data":"d47c9a5b69356821d1425369a53436cecb6ae568baef241b04f2ea0b89d8a3a7"} Dec 02 14:31:29 crc kubenswrapper[4814]: E1202 14:31:29.898589 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:30.398553216 +0000 UTC m=+152.522915410 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.899204 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:29 crc kubenswrapper[4814]: E1202 14:31:29.901246 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:30.401212505 +0000 UTC m=+152.525574709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.975253 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7xfzc"] Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.989122 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8mxvp" event={"ID":"a784542d-74ab-4b13-90cb-9159a9dbc173","Type":"ContainerStarted","Data":"f511928abd52ae35fdf711e96061a5cbb5097041ac0806b0a6bfcc3cc5eb3553"} Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.991371 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mx8k5" event={"ID":"9410a720-183e-4f95-adb4-3ec7aaeb03e0","Type":"ContainerStarted","Data":"c54231b838dc69256d30dd91b9df60fdbbb22b90304d79d84d604b82fa63fdb3"} Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.991404 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mx8k5" event={"ID":"9410a720-183e-4f95-adb4-3ec7aaeb03e0","Type":"ContainerStarted","Data":"1714a0233d7611b9881ec6874054f11522e605f862267e22140fbc870eaa6df1"} Dec 02 14:31:29 crc kubenswrapper[4814]: I1202 14:31:29.992425 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-mx8k5" Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.004033 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:30 crc kubenswrapper[4814]: E1202 14:31:30.005127 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:30.50510341 +0000 UTC m=+152.629465614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.014188 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-k65hc" event={"ID":"25faedf3-ec5c-42cb-804c-15fdfc7ac152","Type":"ContainerStarted","Data":"cc6cf41f5e80a85a63158fb9fb8c396f426af1f1ace5e24cb7dd718426dd6fdf"} Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.024416 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lx28q" event={"ID":"be15a2d4-290c-45de-a493-4161afdde26f","Type":"ContainerStarted","Data":"6a851a1bf79dea58cca7850ce8ac72f9781602d4c9bed0c38d532a03d44be94b"} Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.027832 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-mx8k5" podStartSLOduration=10.027816973 podStartE2EDuration="10.027816973s" podCreationTimestamp="2025-12-02 14:31:20 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:30.024847437 +0000 UTC m=+152.149209651" watchObservedRunningTime="2025-12-02 14:31:30.027816973 +0000 UTC m=+152.152179177" Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.055495 4814 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fnp2w container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.055563 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" podUID="a6316d49-679c-4393-bae7-5ca27876cce4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.057016 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nblt7" event={"ID":"2816ec52-510f-49c7-a70e-4e7db428d500","Type":"ContainerStarted","Data":"767a1f4a443c7d5dddfa8aa2b1f93b7bd4d4de5c7965a7a1461fa030378f431c"} Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.071607 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbcdh" Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.099809 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5pw69" Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.108579 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:30 crc kubenswrapper[4814]: E1202 14:31:30.109075 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:30.609054718 +0000 UTC m=+152.733416922 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.119563 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nblt7" podStartSLOduration=128.119530977 podStartE2EDuration="2m8.119530977s" podCreationTimestamp="2025-12-02 14:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:30.077926649 +0000 UTC m=+152.202288853" watchObservedRunningTime="2025-12-02 14:31:30.119530977 +0000 UTC m=+152.243893181" Dec 02 14:31:30 crc kubenswrapper[4814]: 
I1202 14:31:30.175078 4814 patch_prober.go:28] interesting pod/router-default-5444994796-qbj7r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 14:31:30 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Dec 02 14:31:30 crc kubenswrapper[4814]: [+]process-running ok Dec 02 14:31:30 crc kubenswrapper[4814]: healthz check failed Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.175544 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qbj7r" podUID="9f6f57f3-b8c7-4c71-b7bc-627e6ba30686" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.211490 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:30 crc kubenswrapper[4814]: E1202 14:31:30.212995 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:30.712977934 +0000 UTC m=+152.837340138 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.315188 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:30 crc kubenswrapper[4814]: E1202 14:31:30.317272 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:30.817257649 +0000 UTC m=+152.941619853 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.346170 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lvn9c"] Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.420807 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:30 crc kubenswrapper[4814]: E1202 14:31:30.421162 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:30.921147155 +0000 UTC m=+153.045509349 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.502314 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sxpmb"] Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.523142 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:30 crc kubenswrapper[4814]: E1202 14:31:30.523437 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:31.023422709 +0000 UTC m=+153.147784913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.567221 4814 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.606112 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5pw69"] Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.626211 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:30 crc kubenswrapper[4814]: E1202 14:31:30.626365 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:31.12633548 +0000 UTC m=+153.250697684 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.626764 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:30 crc kubenswrapper[4814]: E1202 14:31:30.627087 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:31.127073309 +0000 UTC m=+153.251435513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.728060 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:30 crc kubenswrapper[4814]: E1202 14:31:30.728346 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:31.228293086 +0000 UTC m=+153.352655290 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.728524 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:30 crc kubenswrapper[4814]: E1202 14:31:30.729069 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:31.229050826 +0000 UTC m=+153.353413040 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.814653 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lcv28"] Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.816970 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcv28" Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.819980 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.826889 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcv28"] Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.829491 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.829638 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g5n4\" (UniqueName: \"kubernetes.io/projected/4d9788e9-39e1-4f60-be8a-891c0c224019-kube-api-access-6g5n4\") pod \"redhat-marketplace-lcv28\" (UID: \"4d9788e9-39e1-4f60-be8a-891c0c224019\") " 
pod="openshift-marketplace/redhat-marketplace-lcv28" Dec 02 14:31:30 crc kubenswrapper[4814]: E1202 14:31:30.829732 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:31.329702148 +0000 UTC m=+153.454064352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.829967 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.830040 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9788e9-39e1-4f60-be8a-891c0c224019-utilities\") pod \"redhat-marketplace-lcv28\" (UID: \"4d9788e9-39e1-4f60-be8a-891c0c224019\") " pod="openshift-marketplace/redhat-marketplace-lcv28" Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.830097 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4d9788e9-39e1-4f60-be8a-891c0c224019-catalog-content\") pod \"redhat-marketplace-lcv28\" (UID: \"4d9788e9-39e1-4f60-be8a-891c0c224019\") " pod="openshift-marketplace/redhat-marketplace-lcv28" Dec 02 14:31:30 crc kubenswrapper[4814]: E1202 14:31:30.830303 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:31.330291053 +0000 UTC m=+153.454653257 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.930776 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:30 crc kubenswrapper[4814]: E1202 14:31:30.931206 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:31.431170422 +0000 UTC m=+153.555532626 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.931295 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.931399 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9788e9-39e1-4f60-be8a-891c0c224019-utilities\") pod \"redhat-marketplace-lcv28\" (UID: \"4d9788e9-39e1-4f60-be8a-891c0c224019\") " pod="openshift-marketplace/redhat-marketplace-lcv28" Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.931459 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9788e9-39e1-4f60-be8a-891c0c224019-catalog-content\") pod \"redhat-marketplace-lcv28\" (UID: \"4d9788e9-39e1-4f60-be8a-891c0c224019\") " pod="openshift-marketplace/redhat-marketplace-lcv28" Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.931517 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g5n4\" (UniqueName: \"kubernetes.io/projected/4d9788e9-39e1-4f60-be8a-891c0c224019-kube-api-access-6g5n4\") pod \"redhat-marketplace-lcv28\" (UID: 
\"4d9788e9-39e1-4f60-be8a-891c0c224019\") " pod="openshift-marketplace/redhat-marketplace-lcv28" Dec 02 14:31:30 crc kubenswrapper[4814]: E1202 14:31:30.931654 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:31.431638534 +0000 UTC m=+153.556000738 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.931942 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9788e9-39e1-4f60-be8a-891c0c224019-utilities\") pod \"redhat-marketplace-lcv28\" (UID: \"4d9788e9-39e1-4f60-be8a-891c0c224019\") " pod="openshift-marketplace/redhat-marketplace-lcv28" Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.931996 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9788e9-39e1-4f60-be8a-891c0c224019-catalog-content\") pod \"redhat-marketplace-lcv28\" (UID: \"4d9788e9-39e1-4f60-be8a-891c0c224019\") " pod="openshift-marketplace/redhat-marketplace-lcv28" Dec 02 14:31:30 crc kubenswrapper[4814]: I1202 14:31:30.955499 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g5n4\" (UniqueName: \"kubernetes.io/projected/4d9788e9-39e1-4f60-be8a-891c0c224019-kube-api-access-6g5n4\") pod \"redhat-marketplace-lcv28\" (UID: 
\"4d9788e9-39e1-4f60-be8a-891c0c224019\") " pod="openshift-marketplace/redhat-marketplace-lcv28" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.032838 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:31 crc kubenswrapper[4814]: E1202 14:31:31.033034 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:31.532999925 +0000 UTC m=+153.657362129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.033127 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:31 crc kubenswrapper[4814]: E1202 14:31:31.033478 4814 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:31.533471227 +0000 UTC m=+153.657833431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.044992 4814 patch_prober.go:28] interesting pod/router-default-5444994796-qbj7r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 14:31:31 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Dec 02 14:31:31 crc kubenswrapper[4814]: [+]process-running ok Dec 02 14:31:31 crc kubenswrapper[4814]: healthz check failed Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.045063 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qbj7r" podUID="9f6f57f3-b8c7-4c71-b7bc-627e6ba30686" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.102516 4814 generic.go:334] "Generic (PLEG): container finished" podID="5879c1b7-4967-4059-9443-7821ccc92d59" containerID="4bc8ccea31ee2269298ee62a4a2e7fcf6522a045d637017529cf9c8e2404daab" exitCode=0 Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.103387 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pw69" 
event={"ID":"5879c1b7-4967-4059-9443-7821ccc92d59","Type":"ContainerDied","Data":"4bc8ccea31ee2269298ee62a4a2e7fcf6522a045d637017529cf9c8e2404daab"} Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.103411 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pw69" event={"ID":"5879c1b7-4967-4059-9443-7821ccc92d59","Type":"ContainerStarted","Data":"2e9186dbd36b978ed51f6d49be0df1770d9faf9fa7b71d56ef872c40a1aeaf05"} Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.107247 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.107823 4814 generic.go:334] "Generic (PLEG): container finished" podID="f714bdcc-9476-4ed4-a164-5e6e5657e0a3" containerID="ab55601aef0ac071a50af6db1bd54cc37c36ed8ea7166355557fcfac003d67f7" exitCode=0 Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.107926 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xfzc" event={"ID":"f714bdcc-9476-4ed4-a164-5e6e5657e0a3","Type":"ContainerDied","Data":"ab55601aef0ac071a50af6db1bd54cc37c36ed8ea7166355557fcfac003d67f7"} Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.107957 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xfzc" event={"ID":"f714bdcc-9476-4ed4-a164-5e6e5657e0a3","Type":"ContainerStarted","Data":"e577d2db7688c7c43b0b567f028c422aeede99d4dd5267f70089c820094f0df1"} Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.120962 4814 generic.go:334] "Generic (PLEG): container finished" podID="f878b86f-db68-4fe8-ae1d-74a9edd15eff" containerID="d58517598000080edc2b284b0bb4fd3a3660c6ca31b9663f13eb1cff1f760391" exitCode=0 Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.121402 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lvn9c" 
event={"ID":"f878b86f-db68-4fe8-ae1d-74a9edd15eff","Type":"ContainerDied","Data":"d58517598000080edc2b284b0bb4fd3a3660c6ca31b9663f13eb1cff1f760391"} Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.121468 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lvn9c" event={"ID":"f878b86f-db68-4fe8-ae1d-74a9edd15eff","Type":"ContainerStarted","Data":"438e3848b9008f02ced1f5bc38290f6fc3103cd931d1a18a9f14b9ef98b14e67"} Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.134516 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:31 crc kubenswrapper[4814]: E1202 14:31:31.136414 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:31.634733245 +0000 UTC m=+153.759095449 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.136516 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:31 crc kubenswrapper[4814]: E1202 14:31:31.137695 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:31.637675291 +0000 UTC m=+153.762037495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.150960 4814 generic.go:334] "Generic (PLEG): container finished" podID="19d86ed9-533c-4a11-a33b-74257f105a6e" containerID="375b63938bea4762a8531d7c2edf4600cbc234849781e72035dff0382cab8058" exitCode=0 Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.151056 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm" event={"ID":"19d86ed9-533c-4a11-a33b-74257f105a6e","Type":"ContainerDied","Data":"375b63938bea4762a8531d7c2edf4600cbc234849781e72035dff0382cab8058"} Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.165554 4814 generic.go:334] "Generic (PLEG): container finished" podID="638fc62a-4504-4762-bbaa-e7a6a6dfbca0" containerID="2af201d318f6e89ff3d54714e2fd25c0b20ef0ca1538a86f295534fda1c08d28" exitCode=0 Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.165954 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxpmb" event={"ID":"638fc62a-4504-4762-bbaa-e7a6a6dfbca0","Type":"ContainerDied","Data":"2af201d318f6e89ff3d54714e2fd25c0b20ef0ca1538a86f295534fda1c08d28"} Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.166032 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxpmb" event={"ID":"638fc62a-4504-4762-bbaa-e7a6a6dfbca0","Type":"ContainerStarted","Data":"262148ac0824976a3012214fd3a7cb2ad543eb55184b7bae1a2fc87e53609b29"} Dec 02 14:31:31 crc kubenswrapper[4814]: 
I1202 14:31:31.175980 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lx28q" event={"ID":"be15a2d4-290c-45de-a493-4161afdde26f","Type":"ContainerStarted","Data":"ed6ec320bd8f4427f6ddb4a4a57de23a237e7f3786847ec8ffc1d2fcedd8cee4"} Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.176030 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lx28q" event={"ID":"be15a2d4-290c-45de-a493-4161afdde26f","Type":"ContainerStarted","Data":"9a1977e3a22c4fddb94c276b9075f831d103378fc27d9d27f7a765fd905bb9f4"} Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.178959 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" event={"ID":"d1134e0d-63cf-4b7f-8f90-1d340c68d818","Type":"ContainerStarted","Data":"b33df6c76b3575d8ac9f61cb4e8d043d1b519a4b473ce2082134c1ccc8735a93"} Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.181346 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nblt7" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.185278 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.210821 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lhgkx"] Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.211971 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcv28" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.212703 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lhgkx" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.231480 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lhgkx"] Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.240327 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.240778 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4388da57-5087-4bb1-8ceb-55603a4ddf9b-catalog-content\") pod \"redhat-marketplace-lhgkx\" (UID: \"4388da57-5087-4bb1-8ceb-55603a4ddf9b\") " pod="openshift-marketplace/redhat-marketplace-lhgkx" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.240818 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4388da57-5087-4bb1-8ceb-55603a4ddf9b-utilities\") pod \"redhat-marketplace-lhgkx\" (UID: \"4388da57-5087-4bb1-8ceb-55603a4ddf9b\") " pod="openshift-marketplace/redhat-marketplace-lhgkx" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.241924 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8nb5\" (UniqueName: \"kubernetes.io/projected/4388da57-5087-4bb1-8ceb-55603a4ddf9b-kube-api-access-l8nb5\") pod \"redhat-marketplace-lhgkx\" (UID: \"4388da57-5087-4bb1-8ceb-55603a4ddf9b\") " pod="openshift-marketplace/redhat-marketplace-lhgkx" Dec 02 14:31:31 crc kubenswrapper[4814]: E1202 14:31:31.242747 4814 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 14:31:31.742720206 +0000 UTC m=+153.867082400 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.289645 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" podStartSLOduration=129.289619349 podStartE2EDuration="2m9.289619349s" podCreationTimestamp="2025-12-02 14:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:31.289212659 +0000 UTC m=+153.413574863" watchObservedRunningTime="2025-12-02 14:31:31.289619349 +0000 UTC m=+153.413981553" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.344185 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.344301 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4388da57-5087-4bb1-8ceb-55603a4ddf9b-catalog-content\") pod \"redhat-marketplace-lhgkx\" (UID: \"4388da57-5087-4bb1-8ceb-55603a4ddf9b\") " pod="openshift-marketplace/redhat-marketplace-lhgkx" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.344330 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4388da57-5087-4bb1-8ceb-55603a4ddf9b-utilities\") pod \"redhat-marketplace-lhgkx\" (UID: \"4388da57-5087-4bb1-8ceb-55603a4ddf9b\") " pod="openshift-marketplace/redhat-marketplace-lhgkx" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.344366 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8nb5\" (UniqueName: \"kubernetes.io/projected/4388da57-5087-4bb1-8ceb-55603a4ddf9b-kube-api-access-l8nb5\") pod \"redhat-marketplace-lhgkx\" (UID: \"4388da57-5087-4bb1-8ceb-55603a4ddf9b\") " pod="openshift-marketplace/redhat-marketplace-lhgkx" Dec 02 14:31:31 crc kubenswrapper[4814]: E1202 14:31:31.345176 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 14:31:31.845160225 +0000 UTC m=+153.969522439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hvzjs" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.345679 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4388da57-5087-4bb1-8ceb-55603a4ddf9b-catalog-content\") pod \"redhat-marketplace-lhgkx\" (UID: \"4388da57-5087-4bb1-8ceb-55603a4ddf9b\") " pod="openshift-marketplace/redhat-marketplace-lhgkx" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.345967 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4388da57-5087-4bb1-8ceb-55603a4ddf9b-utilities\") pod \"redhat-marketplace-lhgkx\" (UID: \"4388da57-5087-4bb1-8ceb-55603a4ddf9b\") " pod="openshift-marketplace/redhat-marketplace-lhgkx" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.372854 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8nb5\" (UniqueName: \"kubernetes.io/projected/4388da57-5087-4bb1-8ceb-55603a4ddf9b-kube-api-access-l8nb5\") pod \"redhat-marketplace-lhgkx\" (UID: \"4388da57-5087-4bb1-8ceb-55603a4ddf9b\") " pod="openshift-marketplace/redhat-marketplace-lhgkx" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.421676 4814 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-02T14:31:30.567280505Z","Handler":null,"Name":""} Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.429722 4814 
csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.429769 4814 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.446903 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.452060 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.492539 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcv28"] Dec 02 14:31:31 crc kubenswrapper[4814]: W1202 14:31:31.500490 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d9788e9_39e1_4f60_be8a_891c0c224019.slice/crio-3305fa127c8c08df9132c828ccb2e11649a138e06c4f04a1752adbc37f692f0d WatchSource:0}: Error finding container 3305fa127c8c08df9132c828ccb2e11649a138e06c4f04a1752adbc37f692f0d: Status 404 returned error can't find the container with id 3305fa127c8c08df9132c828ccb2e11649a138e06c4f04a1752adbc37f692f0d Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.536115 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lhgkx" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.549016 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.551948 4814 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.551994 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.573811 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hvzjs\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.742948 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.812106 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lhgkx"] Dec 02 14:31:31 crc kubenswrapper[4814]: W1202 14:31:31.823337 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4388da57_5087_4bb1_8ceb_55603a4ddf9b.slice/crio-b396a80e88638ce5000eb8463ed121b1fecd6603aa18de196ec858b64dd89be1 WatchSource:0}: Error finding container b396a80e88638ce5000eb8463ed121b1fecd6603aa18de196ec858b64dd89be1: Status 404 returned error can't find the container with id b396a80e88638ce5000eb8463ed121b1fecd6603aa18de196ec858b64dd89be1 Dec 02 14:31:31 crc kubenswrapper[4814]: I1202 14:31:31.970644 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hvzjs"] Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.009853 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v44dt"] Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.016547 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v44dt" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.031085 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.044035 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.044196 4814 patch_prober.go:28] interesting pod/router-default-5444994796-qbj7r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 14:31:32 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Dec 02 14:31:32 crc kubenswrapper[4814]: [+]process-running ok Dec 02 14:31:32 crc kubenswrapper[4814]: healthz check failed Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.044271 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qbj7r" podUID="9f6f57f3-b8c7-4c71-b7bc-627e6ba30686" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.051279 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.055147 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.059761 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.117358 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v44dt"] Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.117419 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.162644 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61a2a03e-98ed-43a5-981a-2c1bcafb1368-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"61a2a03e-98ed-43a5-981a-2c1bcafb1368\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.162697 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70239f99-6b21-408d-9506-f1b9afbafd90-utilities\") pod \"redhat-operators-v44dt\" (UID: \"70239f99-6b21-408d-9506-f1b9afbafd90\") " pod="openshift-marketplace/redhat-operators-v44dt" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.162749 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70239f99-6b21-408d-9506-f1b9afbafd90-catalog-content\") pod \"redhat-operators-v44dt\" (UID: \"70239f99-6b21-408d-9506-f1b9afbafd90\") " 
pod="openshift-marketplace/redhat-operators-v44dt" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.162774 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcqpp\" (UniqueName: \"kubernetes.io/projected/70239f99-6b21-408d-9506-f1b9afbafd90-kube-api-access-mcqpp\") pod \"redhat-operators-v44dt\" (UID: \"70239f99-6b21-408d-9506-f1b9afbafd90\") " pod="openshift-marketplace/redhat-operators-v44dt" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.162794 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61a2a03e-98ed-43a5-981a-2c1bcafb1368-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"61a2a03e-98ed-43a5-981a-2c1bcafb1368\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.164784 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.165626 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.206966 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-5vc2w" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.233106 4814 generic.go:334] "Generic (PLEG): container finished" podID="4d9788e9-39e1-4f60-be8a-891c0c224019" containerID="e2d6dc836295e7bbb6a1bcccfc3c1f35286b4054a70ab64b6316a938d2958f39" exitCode=0 Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.233215 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcv28" 
event={"ID":"4d9788e9-39e1-4f60-be8a-891c0c224019","Type":"ContainerDied","Data":"e2d6dc836295e7bbb6a1bcccfc3c1f35286b4054a70ab64b6316a938d2958f39"} Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.233266 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcv28" event={"ID":"4d9788e9-39e1-4f60-be8a-891c0c224019","Type":"ContainerStarted","Data":"3305fa127c8c08df9132c828ccb2e11649a138e06c4f04a1752adbc37f692f0d"} Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.240560 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lx28q" event={"ID":"be15a2d4-290c-45de-a493-4161afdde26f","Type":"ContainerStarted","Data":"05cc002994c791b30b316ec4f73bcb95f5b8c8ea122c206306cb263cd292d162"} Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.241965 4814 patch_prober.go:28] interesting pod/downloads-7954f5f757-tn4k9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.242038 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tn4k9" podUID="c8737e3b-a2b1-4b23-90bc-5dc758823ae9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.242304 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" event={"ID":"6a507d1f-367a-4ad8-b436-592c077be1f1","Type":"ContainerStarted","Data":"8776fb73cf786131290372486852b715ab9ec9217e851da2672e62f5c7a5a7e3"} Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.242433 4814 patch_prober.go:28] interesting pod/downloads-7954f5f757-tn4k9 container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.242526 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tn4k9" podUID="c8737e3b-a2b1-4b23-90bc-5dc758823ae9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.245285 4814 generic.go:334] "Generic (PLEG): container finished" podID="4388da57-5087-4bb1-8ceb-55603a4ddf9b" containerID="92d1f9902eecdac1037c7bbd912a6182aba1cbfff44d3521f63216752b6b07f9" exitCode=0 Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.247135 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhgkx" event={"ID":"4388da57-5087-4bb1-8ceb-55603a4ddf9b","Type":"ContainerDied","Data":"92d1f9902eecdac1037c7bbd912a6182aba1cbfff44d3521f63216752b6b07f9"} Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.247194 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhgkx" event={"ID":"4388da57-5087-4bb1-8ceb-55603a4ddf9b","Type":"ContainerStarted","Data":"b396a80e88638ce5000eb8463ed121b1fecd6603aa18de196ec858b64dd89be1"} Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.264042 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61a2a03e-98ed-43a5-981a-2c1bcafb1368-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"61a2a03e-98ed-43a5-981a-2c1bcafb1368\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.264137 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/61a2a03e-98ed-43a5-981a-2c1bcafb1368-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"61a2a03e-98ed-43a5-981a-2c1bcafb1368\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.264158 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70239f99-6b21-408d-9506-f1b9afbafd90-utilities\") pod \"redhat-operators-v44dt\" (UID: \"70239f99-6b21-408d-9506-f1b9afbafd90\") " pod="openshift-marketplace/redhat-operators-v44dt" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.264198 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70239f99-6b21-408d-9506-f1b9afbafd90-catalog-content\") pod \"redhat-operators-v44dt\" (UID: \"70239f99-6b21-408d-9506-f1b9afbafd90\") " pod="openshift-marketplace/redhat-operators-v44dt" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.264219 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcqpp\" (UniqueName: \"kubernetes.io/projected/70239f99-6b21-408d-9506-f1b9afbafd90-kube-api-access-mcqpp\") pod \"redhat-operators-v44dt\" (UID: \"70239f99-6b21-408d-9506-f1b9afbafd90\") " pod="openshift-marketplace/redhat-operators-v44dt" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.264646 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61a2a03e-98ed-43a5-981a-2c1bcafb1368-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"61a2a03e-98ed-43a5-981a-2c1bcafb1368\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.265465 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/70239f99-6b21-408d-9506-f1b9afbafd90-utilities\") pod \"redhat-operators-v44dt\" (UID: \"70239f99-6b21-408d-9506-f1b9afbafd90\") " pod="openshift-marketplace/redhat-operators-v44dt" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.265500 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70239f99-6b21-408d-9506-f1b9afbafd90-catalog-content\") pod \"redhat-operators-v44dt\" (UID: \"70239f99-6b21-408d-9506-f1b9afbafd90\") " pod="openshift-marketplace/redhat-operators-v44dt" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.319589 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-lx28q" podStartSLOduration=12.319571267 podStartE2EDuration="12.319571267s" podCreationTimestamp="2025-12-02 14:31:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:32.318547631 +0000 UTC m=+154.442909835" watchObservedRunningTime="2025-12-02 14:31:32.319571267 +0000 UTC m=+154.443933471" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.320090 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61a2a03e-98ed-43a5-981a-2c1bcafb1368-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"61a2a03e-98ed-43a5-981a-2c1bcafb1368\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.327182 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcqpp\" (UniqueName: \"kubernetes.io/projected/70239f99-6b21-408d-9506-f1b9afbafd90-kube-api-access-mcqpp\") pod \"redhat-operators-v44dt\" (UID: \"70239f99-6b21-408d-9506-f1b9afbafd90\") " pod="openshift-marketplace/redhat-operators-v44dt" Dec 02 14:31:32 crc kubenswrapper[4814]: 
I1202 14:31:32.351271 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v44dt" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.416984 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bc9fk"] Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.418867 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bc9fk" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.434033 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.436485 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bc9fk"] Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.570334 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b51452-43ab-409e-b220-9032343f5f9c-utilities\") pod \"redhat-operators-bc9fk\" (UID: \"02b51452-43ab-409e-b220-9032343f5f9c\") " pod="openshift-marketplace/redhat-operators-bc9fk" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.570404 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfmmw\" (UniqueName: \"kubernetes.io/projected/02b51452-43ab-409e-b220-9032343f5f9c-kube-api-access-cfmmw\") pod \"redhat-operators-bc9fk\" (UID: \"02b51452-43ab-409e-b220-9032343f5f9c\") " pod="openshift-marketplace/redhat-operators-bc9fk" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.570439 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b51452-43ab-409e-b220-9032343f5f9c-catalog-content\") pod 
\"redhat-operators-bc9fk\" (UID: \"02b51452-43ab-409e-b220-9032343f5f9c\") " pod="openshift-marketplace/redhat-operators-bc9fk" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.671762 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b51452-43ab-409e-b220-9032343f5f9c-utilities\") pod \"redhat-operators-bc9fk\" (UID: \"02b51452-43ab-409e-b220-9032343f5f9c\") " pod="openshift-marketplace/redhat-operators-bc9fk" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.671827 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfmmw\" (UniqueName: \"kubernetes.io/projected/02b51452-43ab-409e-b220-9032343f5f9c-kube-api-access-cfmmw\") pod \"redhat-operators-bc9fk\" (UID: \"02b51452-43ab-409e-b220-9032343f5f9c\") " pod="openshift-marketplace/redhat-operators-bc9fk" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.671867 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b51452-43ab-409e-b220-9032343f5f9c-catalog-content\") pod \"redhat-operators-bc9fk\" (UID: \"02b51452-43ab-409e-b220-9032343f5f9c\") " pod="openshift-marketplace/redhat-operators-bc9fk" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.672484 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b51452-43ab-409e-b220-9032343f5f9c-catalog-content\") pod \"redhat-operators-bc9fk\" (UID: \"02b51452-43ab-409e-b220-9032343f5f9c\") " pod="openshift-marketplace/redhat-operators-bc9fk" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.675445 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b51452-43ab-409e-b220-9032343f5f9c-utilities\") pod \"redhat-operators-bc9fk\" (UID: 
\"02b51452-43ab-409e-b220-9032343f5f9c\") " pod="openshift-marketplace/redhat-operators-bc9fk" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.689626 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.689747 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.694611 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.694687 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.695682 4814 patch_prober.go:28] interesting pod/console-f9d7485db-794sp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.696253 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-794sp" podUID="9fa2a78f-22e9-4a5a-9224-aa9db469d17a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.697360 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfmmw\" (UniqueName: \"kubernetes.io/projected/02b51452-43ab-409e-b220-9032343f5f9c-kube-api-access-cfmmw\") pod \"redhat-operators-bc9fk\" (UID: \"02b51452-43ab-409e-b220-9032343f5f9c\") " pod="openshift-marketplace/redhat-operators-bc9fk" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.709376 
4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.712117 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.745070 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v44dt"] Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.765693 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bc9fk" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.869590 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.876953 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19d86ed9-533c-4a11-a33b-74257f105a6e-secret-volume\") pod \"19d86ed9-533c-4a11-a33b-74257f105a6e\" (UID: \"19d86ed9-533c-4a11-a33b-74257f105a6e\") " Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.877068 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5hrc\" (UniqueName: \"kubernetes.io/projected/19d86ed9-533c-4a11-a33b-74257f105a6e-kube-api-access-k5hrc\") pod \"19d86ed9-533c-4a11-a33b-74257f105a6e\" (UID: \"19d86ed9-533c-4a11-a33b-74257f105a6e\") " Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.877132 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19d86ed9-533c-4a11-a33b-74257f105a6e-config-volume\") pod \"19d86ed9-533c-4a11-a33b-74257f105a6e\" (UID: \"19d86ed9-533c-4a11-a33b-74257f105a6e\") " Dec 02 14:31:32 crc 
kubenswrapper[4814]: I1202 14:31:32.877734 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19d86ed9-533c-4a11-a33b-74257f105a6e-config-volume" (OuterVolumeSpecName: "config-volume") pod "19d86ed9-533c-4a11-a33b-74257f105a6e" (UID: "19d86ed9-533c-4a11-a33b-74257f105a6e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.884380 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19d86ed9-533c-4a11-a33b-74257f105a6e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "19d86ed9-533c-4a11-a33b-74257f105a6e" (UID: "19d86ed9-533c-4a11-a33b-74257f105a6e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.893346 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19d86ed9-533c-4a11-a33b-74257f105a6e-kube-api-access-k5hrc" (OuterVolumeSpecName: "kube-api-access-k5hrc") pod "19d86ed9-533c-4a11-a33b-74257f105a6e" (UID: "19d86ed9-533c-4a11-a33b-74257f105a6e"). InnerVolumeSpecName "kube-api-access-k5hrc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:31:32 crc kubenswrapper[4814]: W1202 14:31:32.903542 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod61a2a03e_98ed_43a5_981a_2c1bcafb1368.slice/crio-d431f38b254c89337f686b4333e44afb75a9891f13cd4775c7f62d8119f8edd5 WatchSource:0}: Error finding container d431f38b254c89337f686b4333e44afb75a9891f13cd4775c7f62d8119f8edd5: Status 404 returned error can't find the container with id d431f38b254c89337f686b4333e44afb75a9891f13cd4775c7f62d8119f8edd5 Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.978809 4814 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19d86ed9-533c-4a11-a33b-74257f105a6e-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.978872 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5hrc\" (UniqueName: \"kubernetes.io/projected/19d86ed9-533c-4a11-a33b-74257f105a6e-kube-api-access-k5hrc\") on node \"crc\" DevicePath \"\"" Dec 02 14:31:32 crc kubenswrapper[4814]: I1202 14:31:32.978883 4814 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19d86ed9-533c-4a11-a33b-74257f105a6e-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.036982 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qbj7r" Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.043977 4814 patch_prober.go:28] interesting pod/router-default-5444994796-qbj7r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 14:31:33 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Dec 02 14:31:33 crc kubenswrapper[4814]: 
[+]process-running ok Dec 02 14:31:33 crc kubenswrapper[4814]: healthz check failed Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.044096 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qbj7r" podUID="9f6f57f3-b8c7-4c71-b7bc-627e6ba30686" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.280347 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" event={"ID":"6a507d1f-367a-4ad8-b436-592c077be1f1","Type":"ContainerStarted","Data":"ca077ed28836578350bdfea51d1546b67a7125f0f485d8dc6885f5065612fc0c"} Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.281238 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.303367 4814 generic.go:334] "Generic (PLEG): container finished" podID="70239f99-6b21-408d-9506-f1b9afbafd90" containerID="2fb7e07005f0478337b2a175650b32c1818e24c1ad0c0d1f950aa037272b5aea" exitCode=0 Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.303533 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v44dt" event={"ID":"70239f99-6b21-408d-9506-f1b9afbafd90","Type":"ContainerDied","Data":"2fb7e07005f0478337b2a175650b32c1818e24c1ad0c0d1f950aa037272b5aea"} Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.303577 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v44dt" event={"ID":"70239f99-6b21-408d-9506-f1b9afbafd90","Type":"ContainerStarted","Data":"04070dc0ce56073ff769def7d2b3d08b51616ae92cdb89db552b201cef7f53de"} Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.309907 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" podStartSLOduration=132.309853507 podStartE2EDuration="2m12.309853507s" podCreationTimestamp="2025-12-02 14:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:33.301190895 +0000 UTC m=+155.425553109" watchObservedRunningTime="2025-12-02 14:31:33.309853507 +0000 UTC m=+155.434215731" Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.324660 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"61a2a03e-98ed-43a5-981a-2c1bcafb1368","Type":"ContainerStarted","Data":"d431f38b254c89337f686b4333e44afb75a9891f13cd4775c7f62d8119f8edd5"} Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.335072 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm" Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.336610 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 02 14:31:33 crc kubenswrapper[4814]: E1202 14:31:33.336880 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19d86ed9-533c-4a11-a33b-74257f105a6e" containerName="collect-profiles" Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.336902 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="19d86ed9-533c-4a11-a33b-74257f105a6e" containerName="collect-profiles" Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.337013 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="19d86ed9-533c-4a11-a33b-74257f105a6e" containerName="collect-profiles" Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.337389 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm" 
event={"ID":"19d86ed9-533c-4a11-a33b-74257f105a6e","Type":"ContainerDied","Data":"ee9a875d1c7020e1e4ccda87f34eabf10b1c5ebf63e07770484b225c68bb02ad"} Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.337425 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee9a875d1c7020e1e4ccda87f34eabf10b1c5ebf63e07770484b225c68bb02ad" Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.337503 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.342466 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.342646 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.349212 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xc88w" Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.353558 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.448491 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bc9fk"] Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.504384 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a580658-b4ec-48be-bebd-adc81c7bac56-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4a580658-b4ec-48be-bebd-adc81c7bac56\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.504593 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a580658-b4ec-48be-bebd-adc81c7bac56-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4a580658-b4ec-48be-bebd-adc81c7bac56\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.606493 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a580658-b4ec-48be-bebd-adc81c7bac56-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4a580658-b4ec-48be-bebd-adc81c7bac56\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.606561 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a580658-b4ec-48be-bebd-adc81c7bac56-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4a580658-b4ec-48be-bebd-adc81c7bac56\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.606639 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a580658-b4ec-48be-bebd-adc81c7bac56-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4a580658-b4ec-48be-bebd-adc81c7bac56\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.644818 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a580658-b4ec-48be-bebd-adc81c7bac56-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4a580658-b4ec-48be-bebd-adc81c7bac56\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 14:31:33 crc kubenswrapper[4814]: I1202 14:31:33.675971 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 14:31:34 crc kubenswrapper[4814]: I1202 14:31:34.050267 4814 patch_prober.go:28] interesting pod/router-default-5444994796-qbj7r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 14:31:34 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Dec 02 14:31:34 crc kubenswrapper[4814]: [+]process-running ok Dec 02 14:31:34 crc kubenswrapper[4814]: healthz check failed Dec 02 14:31:34 crc kubenswrapper[4814]: I1202 14:31:34.050749 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qbj7r" podUID="9f6f57f3-b8c7-4c71-b7bc-627e6ba30686" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 14:31:34 crc kubenswrapper[4814]: I1202 14:31:34.064017 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 02 14:31:34 crc kubenswrapper[4814]: W1202 14:31:34.090265 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4a580658_b4ec_48be_bebd_adc81c7bac56.slice/crio-4d2801993109d6ba91206d32cac26d4334b5a07a9e999cb00224e3e017eff894 WatchSource:0}: Error finding container 4d2801993109d6ba91206d32cac26d4334b5a07a9e999cb00224e3e017eff894: Status 404 returned error can't find the container with id 4d2801993109d6ba91206d32cac26d4334b5a07a9e999cb00224e3e017eff894 Dec 02 14:31:34 crc kubenswrapper[4814]: I1202 14:31:34.362090 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4a580658-b4ec-48be-bebd-adc81c7bac56","Type":"ContainerStarted","Data":"4d2801993109d6ba91206d32cac26d4334b5a07a9e999cb00224e3e017eff894"} Dec 02 14:31:34 crc kubenswrapper[4814]: I1202 14:31:34.366591 4814 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"61a2a03e-98ed-43a5-981a-2c1bcafb1368","Type":"ContainerDied","Data":"a9db1f4bcccb3c0587863cb977cc2e56cdd29fc00b27a25cfbfe1b7a6e54c174"} Dec 02 14:31:34 crc kubenswrapper[4814]: I1202 14:31:34.366564 4814 generic.go:334] "Generic (PLEG): container finished" podID="61a2a03e-98ed-43a5-981a-2c1bcafb1368" containerID="a9db1f4bcccb3c0587863cb977cc2e56cdd29fc00b27a25cfbfe1b7a6e54c174" exitCode=0 Dec 02 14:31:34 crc kubenswrapper[4814]: I1202 14:31:34.429397 4814 generic.go:334] "Generic (PLEG): container finished" podID="02b51452-43ab-409e-b220-9032343f5f9c" containerID="5977e35d43b2ddce9bf7b0e4e00f7debf61f31adc4896551c9480f2d9be03461" exitCode=0 Dec 02 14:31:34 crc kubenswrapper[4814]: I1202 14:31:34.429673 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bc9fk" event={"ID":"02b51452-43ab-409e-b220-9032343f5f9c","Type":"ContainerDied","Data":"5977e35d43b2ddce9bf7b0e4e00f7debf61f31adc4896551c9480f2d9be03461"} Dec 02 14:31:34 crc kubenswrapper[4814]: I1202 14:31:34.429727 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bc9fk" event={"ID":"02b51452-43ab-409e-b220-9032343f5f9c","Type":"ContainerStarted","Data":"1129a5eea10591e354c430c908e1cf49ee85a65b83832d0cf201182ab9d47a98"} Dec 02 14:31:35 crc kubenswrapper[4814]: I1202 14:31:35.054684 4814 patch_prober.go:28] interesting pod/router-default-5444994796-qbj7r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 14:31:35 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Dec 02 14:31:35 crc kubenswrapper[4814]: [+]process-running ok Dec 02 14:31:35 crc kubenswrapper[4814]: healthz check failed Dec 02 14:31:35 crc kubenswrapper[4814]: I1202 14:31:35.054767 4814 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qbj7r" podUID="9f6f57f3-b8c7-4c71-b7bc-627e6ba30686" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 14:31:35 crc kubenswrapper[4814]: I1202 14:31:35.481873 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4a580658-b4ec-48be-bebd-adc81c7bac56","Type":"ContainerStarted","Data":"84cdc356e559411d92f7d87d813de3f579fafd01797567cc72e931f1f5a391e2"} Dec 02 14:31:35 crc kubenswrapper[4814]: I1202 14:31:35.500781 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.5007609840000002 podStartE2EDuration="2.500760984s" podCreationTimestamp="2025-12-02 14:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:31:35.497444848 +0000 UTC m=+157.621807062" watchObservedRunningTime="2025-12-02 14:31:35.500760984 +0000 UTC m=+157.625123188" Dec 02 14:31:35 crc kubenswrapper[4814]: I1202 14:31:35.878921 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 14:31:36 crc kubenswrapper[4814]: I1202 14:31:36.042190 4814 patch_prober.go:28] interesting pod/router-default-5444994796-qbj7r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 14:31:36 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Dec 02 14:31:36 crc kubenswrapper[4814]: [+]process-running ok Dec 02 14:31:36 crc kubenswrapper[4814]: healthz check failed Dec 02 14:31:36 crc kubenswrapper[4814]: I1202 14:31:36.042282 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qbj7r" podUID="9f6f57f3-b8c7-4c71-b7bc-627e6ba30686" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 14:31:36 crc kubenswrapper[4814]: I1202 14:31:36.076382 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61a2a03e-98ed-43a5-981a-2c1bcafb1368-kube-api-access\") pod \"61a2a03e-98ed-43a5-981a-2c1bcafb1368\" (UID: \"61a2a03e-98ed-43a5-981a-2c1bcafb1368\") " Dec 02 14:31:36 crc kubenswrapper[4814]: I1202 14:31:36.076616 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61a2a03e-98ed-43a5-981a-2c1bcafb1368-kubelet-dir\") pod \"61a2a03e-98ed-43a5-981a-2c1bcafb1368\" (UID: \"61a2a03e-98ed-43a5-981a-2c1bcafb1368\") " Dec 02 14:31:36 crc kubenswrapper[4814]: I1202 14:31:36.076724 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61a2a03e-98ed-43a5-981a-2c1bcafb1368-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "61a2a03e-98ed-43a5-981a-2c1bcafb1368" (UID: "61a2a03e-98ed-43a5-981a-2c1bcafb1368"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:31:36 crc kubenswrapper[4814]: I1202 14:31:36.077276 4814 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61a2a03e-98ed-43a5-981a-2c1bcafb1368-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 14:31:36 crc kubenswrapper[4814]: I1202 14:31:36.088497 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61a2a03e-98ed-43a5-981a-2c1bcafb1368-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "61a2a03e-98ed-43a5-981a-2c1bcafb1368" (UID: "61a2a03e-98ed-43a5-981a-2c1bcafb1368"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:31:36 crc kubenswrapper[4814]: I1202 14:31:36.179065 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61a2a03e-98ed-43a5-981a-2c1bcafb1368-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 14:31:36 crc kubenswrapper[4814]: I1202 14:31:36.497648 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"61a2a03e-98ed-43a5-981a-2c1bcafb1368","Type":"ContainerDied","Data":"d431f38b254c89337f686b4333e44afb75a9891f13cd4775c7f62d8119f8edd5"} Dec 02 14:31:36 crc kubenswrapper[4814]: I1202 14:31:36.497685 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 14:31:36 crc kubenswrapper[4814]: I1202 14:31:36.497695 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d431f38b254c89337f686b4333e44afb75a9891f13cd4775c7f62d8119f8edd5" Dec 02 14:31:36 crc kubenswrapper[4814]: I1202 14:31:36.501852 4814 generic.go:334] "Generic (PLEG): container finished" podID="4a580658-b4ec-48be-bebd-adc81c7bac56" containerID="84cdc356e559411d92f7d87d813de3f579fafd01797567cc72e931f1f5a391e2" exitCode=0 Dec 02 14:31:36 crc kubenswrapper[4814]: I1202 14:31:36.501894 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4a580658-b4ec-48be-bebd-adc81c7bac56","Type":"ContainerDied","Data":"84cdc356e559411d92f7d87d813de3f579fafd01797567cc72e931f1f5a391e2"} Dec 02 14:31:37 crc kubenswrapper[4814]: I1202 14:31:37.050350 4814 patch_prober.go:28] interesting pod/router-default-5444994796-qbj7r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 14:31:37 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Dec 02 14:31:37 crc kubenswrapper[4814]: [+]process-running ok Dec 02 14:31:37 crc kubenswrapper[4814]: healthz check failed Dec 02 14:31:37 crc kubenswrapper[4814]: I1202 14:31:37.050430 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qbj7r" podUID="9f6f57f3-b8c7-4c71-b7bc-627e6ba30686" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 14:31:37 crc kubenswrapper[4814]: I1202 14:31:37.871426 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 14:31:37 crc kubenswrapper[4814]: I1202 14:31:37.917572 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a580658-b4ec-48be-bebd-adc81c7bac56-kube-api-access\") pod \"4a580658-b4ec-48be-bebd-adc81c7bac56\" (UID: \"4a580658-b4ec-48be-bebd-adc81c7bac56\") " Dec 02 14:31:37 crc kubenswrapper[4814]: I1202 14:31:37.917761 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a580658-b4ec-48be-bebd-adc81c7bac56-kubelet-dir\") pod \"4a580658-b4ec-48be-bebd-adc81c7bac56\" (UID: \"4a580658-b4ec-48be-bebd-adc81c7bac56\") " Dec 02 14:31:37 crc kubenswrapper[4814]: I1202 14:31:37.918483 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a580658-b4ec-48be-bebd-adc81c7bac56-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4a580658-b4ec-48be-bebd-adc81c7bac56" (UID: "4a580658-b4ec-48be-bebd-adc81c7bac56"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:31:37 crc kubenswrapper[4814]: I1202 14:31:37.930569 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a580658-b4ec-48be-bebd-adc81c7bac56-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4a580658-b4ec-48be-bebd-adc81c7bac56" (UID: "4a580658-b4ec-48be-bebd-adc81c7bac56"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:31:38 crc kubenswrapper[4814]: I1202 14:31:38.026579 4814 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a580658-b4ec-48be-bebd-adc81c7bac56-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 14:31:38 crc kubenswrapper[4814]: I1202 14:31:38.026943 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a580658-b4ec-48be-bebd-adc81c7bac56-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 14:31:38 crc kubenswrapper[4814]: I1202 14:31:38.042689 4814 patch_prober.go:28] interesting pod/router-default-5444994796-qbj7r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 14:31:38 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Dec 02 14:31:38 crc kubenswrapper[4814]: [+]process-running ok Dec 02 14:31:38 crc kubenswrapper[4814]: healthz check failed Dec 02 14:31:38 crc kubenswrapper[4814]: I1202 14:31:38.042759 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qbj7r" podUID="9f6f57f3-b8c7-4c71-b7bc-627e6ba30686" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 14:31:38 crc kubenswrapper[4814]: I1202 14:31:38.487968 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-mx8k5" Dec 02 14:31:38 crc kubenswrapper[4814]: I1202 14:31:38.581113 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4a580658-b4ec-48be-bebd-adc81c7bac56","Type":"ContainerDied","Data":"4d2801993109d6ba91206d32cac26d4334b5a07a9e999cb00224e3e017eff894"} Dec 02 14:31:38 crc kubenswrapper[4814]: I1202 14:31:38.581169 4814 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d2801993109d6ba91206d32cac26d4334b5a07a9e999cb00224e3e017eff894" Dec 02 14:31:38 crc kubenswrapper[4814]: I1202 14:31:38.581257 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 14:31:39 crc kubenswrapper[4814]: I1202 14:31:39.039560 4814 patch_prober.go:28] interesting pod/router-default-5444994796-qbj7r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 14:31:39 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Dec 02 14:31:39 crc kubenswrapper[4814]: [+]process-running ok Dec 02 14:31:39 crc kubenswrapper[4814]: healthz check failed Dec 02 14:31:39 crc kubenswrapper[4814]: I1202 14:31:39.039629 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qbj7r" podUID="9f6f57f3-b8c7-4c71-b7bc-627e6ba30686" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 14:31:40 crc kubenswrapper[4814]: I1202 14:31:40.043326 4814 patch_prober.go:28] interesting pod/router-default-5444994796-qbj7r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 14:31:40 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Dec 02 14:31:40 crc kubenswrapper[4814]: [+]process-running ok Dec 02 14:31:40 crc kubenswrapper[4814]: healthz check failed Dec 02 14:31:40 crc kubenswrapper[4814]: I1202 14:31:40.043401 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qbj7r" podUID="9f6f57f3-b8c7-4c71-b7bc-627e6ba30686" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Dec 02 14:31:41 crc kubenswrapper[4814]: I1202 14:31:41.042670 4814 patch_prober.go:28] interesting pod/router-default-5444994796-qbj7r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 14:31:41 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Dec 02 14:31:41 crc kubenswrapper[4814]: [+]process-running ok Dec 02 14:31:41 crc kubenswrapper[4814]: healthz check failed Dec 02 14:31:41 crc kubenswrapper[4814]: I1202 14:31:41.043268 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qbj7r" podUID="9f6f57f3-b8c7-4c71-b7bc-627e6ba30686" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 14:31:42 crc kubenswrapper[4814]: I1202 14:31:42.041208 4814 patch_prober.go:28] interesting pod/router-default-5444994796-qbj7r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 14:31:42 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Dec 02 14:31:42 crc kubenswrapper[4814]: [+]process-running ok Dec 02 14:31:42 crc kubenswrapper[4814]: healthz check failed Dec 02 14:31:42 crc kubenswrapper[4814]: I1202 14:31:42.041306 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qbj7r" podUID="9f6f57f3-b8c7-4c71-b7bc-627e6ba30686" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 14:31:42 crc kubenswrapper[4814]: I1202 14:31:42.255820 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-tn4k9" Dec 02 14:31:42 crc kubenswrapper[4814]: I1202 14:31:42.690206 4814 patch_prober.go:28] interesting pod/console-f9d7485db-794sp 
container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Dec 02 14:31:42 crc kubenswrapper[4814]: I1202 14:31:42.690496 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-794sp" podUID="9fa2a78f-22e9-4a5a-9224-aa9db469d17a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Dec 02 14:31:43 crc kubenswrapper[4814]: I1202 14:31:43.040280 4814 patch_prober.go:28] interesting pod/router-default-5444994796-qbj7r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 14:31:43 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Dec 02 14:31:43 crc kubenswrapper[4814]: [+]process-running ok Dec 02 14:31:43 crc kubenswrapper[4814]: healthz check failed Dec 02 14:31:43 crc kubenswrapper[4814]: I1202 14:31:43.040352 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qbj7r" podUID="9f6f57f3-b8c7-4c71-b7bc-627e6ba30686" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 14:31:44 crc kubenswrapper[4814]: I1202 14:31:44.043298 4814 patch_prober.go:28] interesting pod/router-default-5444994796-qbj7r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 14:31:44 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Dec 02 14:31:44 crc kubenswrapper[4814]: [+]process-running ok Dec 02 14:31:44 crc kubenswrapper[4814]: healthz check failed Dec 02 14:31:44 crc kubenswrapper[4814]: I1202 14:31:44.043689 4814 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qbj7r" podUID="9f6f57f3-b8c7-4c71-b7bc-627e6ba30686" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 14:31:44 crc kubenswrapper[4814]: I1202 14:31:44.140097 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs\") pod \"network-metrics-daemon-5n4wm\" (UID: \"2ad0a031-3e26-4596-a7a9-cd00a95b7650\") " pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:31:44 crc kubenswrapper[4814]: I1202 14:31:44.147055 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ad0a031-3e26-4596-a7a9-cd00a95b7650-metrics-certs\") pod \"network-metrics-daemon-5n4wm\" (UID: \"2ad0a031-3e26-4596-a7a9-cd00a95b7650\") " pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:31:44 crc kubenswrapper[4814]: I1202 14:31:44.364362 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5n4wm" Dec 02 14:31:45 crc kubenswrapper[4814]: I1202 14:31:45.042275 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qbj7r" Dec 02 14:31:45 crc kubenswrapper[4814]: I1202 14:31:45.045489 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qbj7r" Dec 02 14:31:50 crc kubenswrapper[4814]: I1202 14:31:50.650845 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:31:50 crc kubenswrapper[4814]: I1202 14:31:50.652333 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:31:51 crc kubenswrapper[4814]: I1202 14:31:51.753987 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:31:53 crc kubenswrapper[4814]: I1202 14:31:53.073027 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:31:53 crc kubenswrapper[4814]: I1202 14:31:53.076954 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:32:03 crc kubenswrapper[4814]: I1202 14:32:03.085922 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nblt7" 
Dec 02 14:32:03 crc kubenswrapper[4814]: E1202 14:32:03.612183 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 02 14:32:03 crc kubenswrapper[4814]: E1202 14:32:03.612807 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fm7qf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-7xfzc_openshift-marketplace(f714bdcc-9476-4ed4-a164-5e6e5657e0a3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 14:32:03 crc kubenswrapper[4814]: E1202 14:32:03.613982 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-7xfzc" podUID="f714bdcc-9476-4ed4-a164-5e6e5657e0a3" Dec 02 14:32:03 crc kubenswrapper[4814]: E1202 14:32:03.663443 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 02 14:32:03 crc kubenswrapper[4814]: E1202 14:32:03.663964 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mcqpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-v44dt_openshift-marketplace(70239f99-6b21-408d-9506-f1b9afbafd90): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 14:32:03 crc kubenswrapper[4814]: E1202 14:32:03.665392 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-v44dt" podUID="70239f99-6b21-408d-9506-f1b9afbafd90" Dec 02 14:32:03 crc 
kubenswrapper[4814]: E1202 14:32:03.677595 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 02 14:32:03 crc kubenswrapper[4814]: E1202 14:32:03.677769 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8nb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-lhgkx_openshift-marketplace(4388da57-5087-4bb1-8ceb-55603a4ddf9b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 14:32:03 crc kubenswrapper[4814]: E1202 14:32:03.678917 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lhgkx" podUID="4388da57-5087-4bb1-8ceb-55603a4ddf9b" Dec 02 14:32:03 crc kubenswrapper[4814]: I1202 14:32:03.991872 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5n4wm"] Dec 02 14:32:04 crc kubenswrapper[4814]: W1202 14:32:04.009350 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ad0a031_3e26_4596_a7a9_cd00a95b7650.slice/crio-150be29cac38c7c576d2849c2ccf6b1f22cead2bb65e76e865fe3d29092bc67b WatchSource:0}: Error finding container 150be29cac38c7c576d2849c2ccf6b1f22cead2bb65e76e865fe3d29092bc67b: Status 404 returned error can't find the container with id 150be29cac38c7c576d2849c2ccf6b1f22cead2bb65e76e865fe3d29092bc67b Dec 02 14:32:04 crc kubenswrapper[4814]: I1202 14:32:04.343598 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bc9fk" event={"ID":"02b51452-43ab-409e-b220-9032343f5f9c","Type":"ContainerStarted","Data":"a055e6c84b42602bbad22ba8a040f7806f317bfa6082dd3e5fe0cab3bd88a1aa"} Dec 02 14:32:04 crc kubenswrapper[4814]: I1202 14:32:04.347641 4814 generic.go:334] "Generic (PLEG): container finished" podID="f878b86f-db68-4fe8-ae1d-74a9edd15eff" containerID="3d4f025b7522a1979647a6a10fc71da2d83251c8597d397f11141911140a2b29" exitCode=0 Dec 02 14:32:04 crc kubenswrapper[4814]: I1202 14:32:04.347752 4814 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lvn9c" event={"ID":"f878b86f-db68-4fe8-ae1d-74a9edd15eff","Type":"ContainerDied","Data":"3d4f025b7522a1979647a6a10fc71da2d83251c8597d397f11141911140a2b29"} Dec 02 14:32:04 crc kubenswrapper[4814]: I1202 14:32:04.355693 4814 generic.go:334] "Generic (PLEG): container finished" podID="638fc62a-4504-4762-bbaa-e7a6a6dfbca0" containerID="f8660bf30e75e4ae6ed126be6512c6c6390b2942d0d0f3f4de11b7998728a3c2" exitCode=0 Dec 02 14:32:04 crc kubenswrapper[4814]: I1202 14:32:04.355799 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxpmb" event={"ID":"638fc62a-4504-4762-bbaa-e7a6a6dfbca0","Type":"ContainerDied","Data":"f8660bf30e75e4ae6ed126be6512c6c6390b2942d0d0f3f4de11b7998728a3c2"} Dec 02 14:32:04 crc kubenswrapper[4814]: I1202 14:32:04.359882 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5n4wm" event={"ID":"2ad0a031-3e26-4596-a7a9-cd00a95b7650","Type":"ContainerStarted","Data":"150be29cac38c7c576d2849c2ccf6b1f22cead2bb65e76e865fe3d29092bc67b"} Dec 02 14:32:04 crc kubenswrapper[4814]: I1202 14:32:04.363274 4814 generic.go:334] "Generic (PLEG): container finished" podID="5879c1b7-4967-4059-9443-7821ccc92d59" containerID="0c0be18a8e1e02e9ecdb5d3f781e4291dce2fa49017865f43fe7c62c9025f325" exitCode=0 Dec 02 14:32:04 crc kubenswrapper[4814]: I1202 14:32:04.363352 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pw69" event={"ID":"5879c1b7-4967-4059-9443-7821ccc92d59","Type":"ContainerDied","Data":"0c0be18a8e1e02e9ecdb5d3f781e4291dce2fa49017865f43fe7c62c9025f325"} Dec 02 14:32:04 crc kubenswrapper[4814]: I1202 14:32:04.376430 4814 generic.go:334] "Generic (PLEG): container finished" podID="4d9788e9-39e1-4f60-be8a-891c0c224019" containerID="fcf54beb0937de0aeaf94f8e1d390114c97650dba327ee58c35f4b34c4e932d2" exitCode=0 Dec 02 
14:32:04 crc kubenswrapper[4814]: I1202 14:32:04.378096 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcv28" event={"ID":"4d9788e9-39e1-4f60-be8a-891c0c224019","Type":"ContainerDied","Data":"fcf54beb0937de0aeaf94f8e1d390114c97650dba327ee58c35f4b34c4e932d2"} Dec 02 14:32:04 crc kubenswrapper[4814]: E1202 14:32:04.382290 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-v44dt" podUID="70239f99-6b21-408d-9506-f1b9afbafd90" Dec 02 14:32:04 crc kubenswrapper[4814]: E1202 14:32:04.382891 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-7xfzc" podUID="f714bdcc-9476-4ed4-a164-5e6e5657e0a3" Dec 02 14:32:04 crc kubenswrapper[4814]: E1202 14:32:04.385407 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lhgkx" podUID="4388da57-5087-4bb1-8ceb-55603a4ddf9b" Dec 02 14:32:04 crc kubenswrapper[4814]: I1202 14:32:04.478359 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 14:32:05 crc kubenswrapper[4814]: I1202 14:32:05.391629 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5n4wm" 
event={"ID":"2ad0a031-3e26-4596-a7a9-cd00a95b7650","Type":"ContainerStarted","Data":"0672e7c51895bad9673300418ef9b0a073fbb5b69ec318de0d13fa371470f378"} Dec 02 14:32:05 crc kubenswrapper[4814]: I1202 14:32:05.392151 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5n4wm" event={"ID":"2ad0a031-3e26-4596-a7a9-cd00a95b7650","Type":"ContainerStarted","Data":"9aff6c838f6a0b8ae14d1172a02a6aeac65de65948b79615ff7bf3ad51033b01"} Dec 02 14:32:05 crc kubenswrapper[4814]: I1202 14:32:05.395295 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pw69" event={"ID":"5879c1b7-4967-4059-9443-7821ccc92d59","Type":"ContainerStarted","Data":"7550c6887cfb1ee88177c89a13077a767a391191cc5ede7dd61f91ed2e70ea6e"} Dec 02 14:32:05 crc kubenswrapper[4814]: I1202 14:32:05.400884 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcv28" event={"ID":"4d9788e9-39e1-4f60-be8a-891c0c224019","Type":"ContainerStarted","Data":"4bae04dc07a69137e551dc9e56ce95afcd64323a208371e6eeb46876a2aeb954"} Dec 02 14:32:05 crc kubenswrapper[4814]: I1202 14:32:05.404349 4814 generic.go:334] "Generic (PLEG): container finished" podID="02b51452-43ab-409e-b220-9032343f5f9c" containerID="a055e6c84b42602bbad22ba8a040f7806f317bfa6082dd3e5fe0cab3bd88a1aa" exitCode=0 Dec 02 14:32:05 crc kubenswrapper[4814]: I1202 14:32:05.404560 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bc9fk" event={"ID":"02b51452-43ab-409e-b220-9032343f5f9c","Type":"ContainerDied","Data":"a055e6c84b42602bbad22ba8a040f7806f317bfa6082dd3e5fe0cab3bd88a1aa"} Dec 02 14:32:05 crc kubenswrapper[4814]: I1202 14:32:05.418214 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5n4wm" podStartSLOduration=164.418187677 podStartE2EDuration="2m44.418187677s" podCreationTimestamp="2025-12-02 14:29:21 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:32:05.415422436 +0000 UTC m=+187.539784640" watchObservedRunningTime="2025-12-02 14:32:05.418187677 +0000 UTC m=+187.542549881" Dec 02 14:32:05 crc kubenswrapper[4814]: I1202 14:32:05.424053 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lvn9c" event={"ID":"f878b86f-db68-4fe8-ae1d-74a9edd15eff","Type":"ContainerStarted","Data":"9ab1fb29a9f812e613de25806a347dde553b1d88958978d63731b79131b71a82"} Dec 02 14:32:05 crc kubenswrapper[4814]: I1202 14:32:05.426761 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxpmb" event={"ID":"638fc62a-4504-4762-bbaa-e7a6a6dfbca0","Type":"ContainerStarted","Data":"a7e741870bc8a19bb82ad85146755d63cbd02b80121c09df3e5bf873f3ca520e"} Dec 02 14:32:05 crc kubenswrapper[4814]: I1202 14:32:05.470711 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lcv28" podStartSLOduration=2.612054982 podStartE2EDuration="35.470683814s" podCreationTimestamp="2025-12-02 14:31:30 +0000 UTC" firstStartedPulling="2025-12-02 14:31:32.260560103 +0000 UTC m=+154.384922307" lastFinishedPulling="2025-12-02 14:32:05.119188935 +0000 UTC m=+187.243551139" observedRunningTime="2025-12-02 14:32:05.466469096 +0000 UTC m=+187.590831320" watchObservedRunningTime="2025-12-02 14:32:05.470683814 +0000 UTC m=+187.595046018" Dec 02 14:32:05 crc kubenswrapper[4814]: I1202 14:32:05.488953 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5pw69" podStartSLOduration=2.444017431 podStartE2EDuration="36.488922582s" podCreationTimestamp="2025-12-02 14:31:29 +0000 UTC" firstStartedPulling="2025-12-02 14:31:31.106943892 +0000 UTC m=+153.231306096" lastFinishedPulling="2025-12-02 14:32:05.151849043 
+0000 UTC m=+187.276211247" observedRunningTime="2025-12-02 14:32:05.484584041 +0000 UTC m=+187.608946255" watchObservedRunningTime="2025-12-02 14:32:05.488922582 +0000 UTC m=+187.613284786" Dec 02 14:32:05 crc kubenswrapper[4814]: I1202 14:32:05.520368 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lvn9c" podStartSLOduration=2.453668468 podStartE2EDuration="36.520335818s" podCreationTimestamp="2025-12-02 14:31:29 +0000 UTC" firstStartedPulling="2025-12-02 14:31:31.122694926 +0000 UTC m=+153.247057120" lastFinishedPulling="2025-12-02 14:32:05.189362266 +0000 UTC m=+187.313724470" observedRunningTime="2025-12-02 14:32:05.508029303 +0000 UTC m=+187.632391517" watchObservedRunningTime="2025-12-02 14:32:05.520335818 +0000 UTC m=+187.644698022" Dec 02 14:32:05 crc kubenswrapper[4814]: I1202 14:32:05.553334 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sxpmb" podStartSLOduration=2.735186942 podStartE2EDuration="36.553311434s" podCreationTimestamp="2025-12-02 14:31:29 +0000 UTC" firstStartedPulling="2025-12-02 14:31:31.170687348 +0000 UTC m=+153.295049552" lastFinishedPulling="2025-12-02 14:32:04.98881184 +0000 UTC m=+187.113174044" observedRunningTime="2025-12-02 14:32:05.552810152 +0000 UTC m=+187.677172356" watchObservedRunningTime="2025-12-02 14:32:05.553311434 +0000 UTC m=+187.677673638" Dec 02 14:32:06 crc kubenswrapper[4814]: I1202 14:32:06.434273 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bc9fk" event={"ID":"02b51452-43ab-409e-b220-9032343f5f9c","Type":"ContainerStarted","Data":"711f6dfe495fd127ec3a0e07c3f30c24a650a978da7d798d55994a7f801c53e6"} Dec 02 14:32:06 crc kubenswrapper[4814]: I1202 14:32:06.456374 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bc9fk" podStartSLOduration=2.872613534 
podStartE2EDuration="34.456351225s" podCreationTimestamp="2025-12-02 14:31:32 +0000 UTC" firstStartedPulling="2025-12-02 14:31:34.432061941 +0000 UTC m=+156.556424145" lastFinishedPulling="2025-12-02 14:32:06.015799632 +0000 UTC m=+188.140161836" observedRunningTime="2025-12-02 14:32:06.452922457 +0000 UTC m=+188.577284661" watchObservedRunningTime="2025-12-02 14:32:06.456351225 +0000 UTC m=+188.580713429" Dec 02 14:32:06 crc kubenswrapper[4814]: I1202 14:32:06.881023 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hjjzp"] Dec 02 14:32:08 crc kubenswrapper[4814]: I1202 14:32:08.135862 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 02 14:32:08 crc kubenswrapper[4814]: E1202 14:32:08.136561 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a580658-b4ec-48be-bebd-adc81c7bac56" containerName="pruner" Dec 02 14:32:08 crc kubenswrapper[4814]: I1202 14:32:08.136580 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a580658-b4ec-48be-bebd-adc81c7bac56" containerName="pruner" Dec 02 14:32:08 crc kubenswrapper[4814]: E1202 14:32:08.136609 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a2a03e-98ed-43a5-981a-2c1bcafb1368" containerName="pruner" Dec 02 14:32:08 crc kubenswrapper[4814]: I1202 14:32:08.136617 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a2a03e-98ed-43a5-981a-2c1bcafb1368" containerName="pruner" Dec 02 14:32:08 crc kubenswrapper[4814]: I1202 14:32:08.136770 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a2a03e-98ed-43a5-981a-2c1bcafb1368" containerName="pruner" Dec 02 14:32:08 crc kubenswrapper[4814]: I1202 14:32:08.136790 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a580658-b4ec-48be-bebd-adc81c7bac56" containerName="pruner" Dec 02 14:32:08 crc kubenswrapper[4814]: I1202 14:32:08.137453 4814 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 14:32:08 crc kubenswrapper[4814]: I1202 14:32:08.143202 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 02 14:32:08 crc kubenswrapper[4814]: I1202 14:32:08.143621 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 02 14:32:08 crc kubenswrapper[4814]: I1202 14:32:08.156090 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 02 14:32:08 crc kubenswrapper[4814]: I1202 14:32:08.243966 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/803495b1-7f29-420d-8465-ede6fc4dd66d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"803495b1-7f29-420d-8465-ede6fc4dd66d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 14:32:08 crc kubenswrapper[4814]: I1202 14:32:08.244047 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/803495b1-7f29-420d-8465-ede6fc4dd66d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"803495b1-7f29-420d-8465-ede6fc4dd66d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 14:32:08 crc kubenswrapper[4814]: I1202 14:32:08.344780 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/803495b1-7f29-420d-8465-ede6fc4dd66d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"803495b1-7f29-420d-8465-ede6fc4dd66d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 14:32:08 crc kubenswrapper[4814]: I1202 14:32:08.344853 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/803495b1-7f29-420d-8465-ede6fc4dd66d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"803495b1-7f29-420d-8465-ede6fc4dd66d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 14:32:08 crc kubenswrapper[4814]: I1202 14:32:08.344940 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/803495b1-7f29-420d-8465-ede6fc4dd66d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"803495b1-7f29-420d-8465-ede6fc4dd66d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 14:32:08 crc kubenswrapper[4814]: I1202 14:32:08.368921 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/803495b1-7f29-420d-8465-ede6fc4dd66d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"803495b1-7f29-420d-8465-ede6fc4dd66d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 14:32:08 crc kubenswrapper[4814]: I1202 14:32:08.465986 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 14:32:08 crc kubenswrapper[4814]: I1202 14:32:08.909616 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 02 14:32:09 crc kubenswrapper[4814]: I1202 14:32:09.460175 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"803495b1-7f29-420d-8465-ede6fc4dd66d","Type":"ContainerStarted","Data":"6e7ba145b1c1bee0842646157da1392f994ea17b6ec2024a3894079797e0475c"} Dec 02 14:32:09 crc kubenswrapper[4814]: I1202 14:32:09.608922 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lvn9c" Dec 02 14:32:09 crc kubenswrapper[4814]: I1202 14:32:09.609068 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lvn9c" Dec 02 14:32:09 crc kubenswrapper[4814]: I1202 14:32:09.677370 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lvn9c" Dec 02 14:32:09 crc kubenswrapper[4814]: I1202 14:32:09.777395 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sxpmb" Dec 02 14:32:09 crc kubenswrapper[4814]: I1202 14:32:09.777489 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sxpmb" Dec 02 14:32:09 crc kubenswrapper[4814]: I1202 14:32:09.821631 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sxpmb" Dec 02 14:32:10 crc kubenswrapper[4814]: I1202 14:32:10.100878 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5pw69" Dec 02 14:32:10 crc kubenswrapper[4814]: I1202 14:32:10.100971 4814 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5pw69" Dec 02 14:32:10 crc kubenswrapper[4814]: I1202 14:32:10.151995 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5pw69" Dec 02 14:32:10 crc kubenswrapper[4814]: I1202 14:32:10.470212 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"803495b1-7f29-420d-8465-ede6fc4dd66d","Type":"ContainerStarted","Data":"7b339af68b0002976770a5d0b201602bea4ff318ef30fa645316697a3faaf663"} Dec 02 14:32:10 crc kubenswrapper[4814]: I1202 14:32:10.486804 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.486772272 podStartE2EDuration="2.486772272s" podCreationTimestamp="2025-12-02 14:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:32:10.486292309 +0000 UTC m=+192.610654513" watchObservedRunningTime="2025-12-02 14:32:10.486772272 +0000 UTC m=+192.611134476" Dec 02 14:32:10 crc kubenswrapper[4814]: I1202 14:32:10.532815 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5pw69" Dec 02 14:32:10 crc kubenswrapper[4814]: I1202 14:32:10.544313 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sxpmb" Dec 02 14:32:10 crc kubenswrapper[4814]: I1202 14:32:10.547635 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lvn9c" Dec 02 14:32:11 crc kubenswrapper[4814]: I1202 14:32:11.213022 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lcv28" Dec 02 14:32:11 crc kubenswrapper[4814]: 
I1202 14:32:11.213111 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lcv28" Dec 02 14:32:11 crc kubenswrapper[4814]: I1202 14:32:11.267801 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lcv28" Dec 02 14:32:11 crc kubenswrapper[4814]: I1202 14:32:11.478003 4814 generic.go:334] "Generic (PLEG): container finished" podID="803495b1-7f29-420d-8465-ede6fc4dd66d" containerID="7b339af68b0002976770a5d0b201602bea4ff318ef30fa645316697a3faaf663" exitCode=0 Dec 02 14:32:11 crc kubenswrapper[4814]: I1202 14:32:11.478477 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"803495b1-7f29-420d-8465-ede6fc4dd66d","Type":"ContainerDied","Data":"7b339af68b0002976770a5d0b201602bea4ff318ef30fa645316697a3faaf663"} Dec 02 14:32:11 crc kubenswrapper[4814]: I1202 14:32:11.546481 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lcv28" Dec 02 14:32:12 crc kubenswrapper[4814]: I1202 14:32:12.390905 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5pw69"] Dec 02 14:32:12 crc kubenswrapper[4814]: I1202 14:32:12.484840 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5pw69" podUID="5879c1b7-4967-4059-9443-7821ccc92d59" containerName="registry-server" containerID="cri-o://7550c6887cfb1ee88177c89a13077a767a391191cc5ede7dd61f91ed2e70ea6e" gracePeriod=2 Dec 02 14:32:12 crc kubenswrapper[4814]: I1202 14:32:12.766385 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bc9fk" Dec 02 14:32:12 crc kubenswrapper[4814]: I1202 14:32:12.767323 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-bc9fk" Dec 02 14:32:12 crc kubenswrapper[4814]: I1202 14:32:12.816983 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bc9fk" Dec 02 14:32:12 crc kubenswrapper[4814]: I1202 14:32:12.880996 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 14:32:12 crc kubenswrapper[4814]: I1202 14:32:12.992699 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lvn9c"] Dec 02 14:32:12 crc kubenswrapper[4814]: I1202 14:32:12.993013 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lvn9c" podUID="f878b86f-db68-4fe8-ae1d-74a9edd15eff" containerName="registry-server" containerID="cri-o://9ab1fb29a9f812e613de25806a347dde553b1d88958978d63731b79131b71a82" gracePeriod=2 Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.026634 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/803495b1-7f29-420d-8465-ede6fc4dd66d-kube-api-access\") pod \"803495b1-7f29-420d-8465-ede6fc4dd66d\" (UID: \"803495b1-7f29-420d-8465-ede6fc4dd66d\") " Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.026716 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/803495b1-7f29-420d-8465-ede6fc4dd66d-kubelet-dir\") pod \"803495b1-7f29-420d-8465-ede6fc4dd66d\" (UID: \"803495b1-7f29-420d-8465-ede6fc4dd66d\") " Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.027144 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/803495b1-7f29-420d-8465-ede6fc4dd66d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "803495b1-7f29-420d-8465-ede6fc4dd66d" (UID: 
"803495b1-7f29-420d-8465-ede6fc4dd66d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.028427 4814 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/803495b1-7f29-420d-8465-ede6fc4dd66d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.038173 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/803495b1-7f29-420d-8465-ede6fc4dd66d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "803495b1-7f29-420d-8465-ede6fc4dd66d" (UID: "803495b1-7f29-420d-8465-ede6fc4dd66d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.130612 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/803495b1-7f29-420d-8465-ede6fc4dd66d-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.480395 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5pw69" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.491745 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"803495b1-7f29-420d-8465-ede6fc4dd66d","Type":"ContainerDied","Data":"6e7ba145b1c1bee0842646157da1392f994ea17b6ec2024a3894079797e0475c"} Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.492207 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e7ba145b1c1bee0842646157da1392f994ea17b6ec2024a3894079797e0475c" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.491792 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.498010 4814 generic.go:334] "Generic (PLEG): container finished" podID="f878b86f-db68-4fe8-ae1d-74a9edd15eff" containerID="9ab1fb29a9f812e613de25806a347dde553b1d88958978d63731b79131b71a82" exitCode=0 Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.498059 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lvn9c" event={"ID":"f878b86f-db68-4fe8-ae1d-74a9edd15eff","Type":"ContainerDied","Data":"9ab1fb29a9f812e613de25806a347dde553b1d88958978d63731b79131b71a82"} Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.505333 4814 generic.go:334] "Generic (PLEG): container finished" podID="5879c1b7-4967-4059-9443-7821ccc92d59" containerID="7550c6887cfb1ee88177c89a13077a767a391191cc5ede7dd61f91ed2e70ea6e" exitCode=0 Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.508453 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5pw69" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.508875 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pw69" event={"ID":"5879c1b7-4967-4059-9443-7821ccc92d59","Type":"ContainerDied","Data":"7550c6887cfb1ee88177c89a13077a767a391191cc5ede7dd61f91ed2e70ea6e"} Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.509013 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pw69" event={"ID":"5879c1b7-4967-4059-9443-7821ccc92d59","Type":"ContainerDied","Data":"2e9186dbd36b978ed51f6d49be0df1770d9faf9fa7b71d56ef872c40a1aeaf05"} Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.509192 4814 scope.go:117] "RemoveContainer" containerID="7550c6887cfb1ee88177c89a13077a767a391191cc5ede7dd61f91ed2e70ea6e" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.547582 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lvn9c" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.570475 4814 scope.go:117] "RemoveContainer" containerID="0c0be18a8e1e02e9ecdb5d3f781e4291dce2fa49017865f43fe7c62c9025f325" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.576359 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bc9fk" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.591723 4814 scope.go:117] "RemoveContainer" containerID="4bc8ccea31ee2269298ee62a4a2e7fcf6522a045d637017529cf9c8e2404daab" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.628796 4814 scope.go:117] "RemoveContainer" containerID="7550c6887cfb1ee88177c89a13077a767a391191cc5ede7dd61f91ed2e70ea6e" Dec 02 14:32:13 crc kubenswrapper[4814]: E1202 14:32:13.629372 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7550c6887cfb1ee88177c89a13077a767a391191cc5ede7dd61f91ed2e70ea6e\": container with ID starting with 7550c6887cfb1ee88177c89a13077a767a391191cc5ede7dd61f91ed2e70ea6e not found: ID does not exist" containerID="7550c6887cfb1ee88177c89a13077a767a391191cc5ede7dd61f91ed2e70ea6e" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.629449 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7550c6887cfb1ee88177c89a13077a767a391191cc5ede7dd61f91ed2e70ea6e"} err="failed to get container status \"7550c6887cfb1ee88177c89a13077a767a391191cc5ede7dd61f91ed2e70ea6e\": rpc error: code = NotFound desc = could not find container \"7550c6887cfb1ee88177c89a13077a767a391191cc5ede7dd61f91ed2e70ea6e\": container with ID starting with 7550c6887cfb1ee88177c89a13077a767a391191cc5ede7dd61f91ed2e70ea6e not found: ID does not exist" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.629523 4814 scope.go:117] "RemoveContainer" 
containerID="0c0be18a8e1e02e9ecdb5d3f781e4291dce2fa49017865f43fe7c62c9025f325" Dec 02 14:32:13 crc kubenswrapper[4814]: E1202 14:32:13.630064 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c0be18a8e1e02e9ecdb5d3f781e4291dce2fa49017865f43fe7c62c9025f325\": container with ID starting with 0c0be18a8e1e02e9ecdb5d3f781e4291dce2fa49017865f43fe7c62c9025f325 not found: ID does not exist" containerID="0c0be18a8e1e02e9ecdb5d3f781e4291dce2fa49017865f43fe7c62c9025f325" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.630110 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c0be18a8e1e02e9ecdb5d3f781e4291dce2fa49017865f43fe7c62c9025f325"} err="failed to get container status \"0c0be18a8e1e02e9ecdb5d3f781e4291dce2fa49017865f43fe7c62c9025f325\": rpc error: code = NotFound desc = could not find container \"0c0be18a8e1e02e9ecdb5d3f781e4291dce2fa49017865f43fe7c62c9025f325\": container with ID starting with 0c0be18a8e1e02e9ecdb5d3f781e4291dce2fa49017865f43fe7c62c9025f325 not found: ID does not exist" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.630147 4814 scope.go:117] "RemoveContainer" containerID="4bc8ccea31ee2269298ee62a4a2e7fcf6522a045d637017529cf9c8e2404daab" Dec 02 14:32:13 crc kubenswrapper[4814]: E1202 14:32:13.630479 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bc8ccea31ee2269298ee62a4a2e7fcf6522a045d637017529cf9c8e2404daab\": container with ID starting with 4bc8ccea31ee2269298ee62a4a2e7fcf6522a045d637017529cf9c8e2404daab not found: ID does not exist" containerID="4bc8ccea31ee2269298ee62a4a2e7fcf6522a045d637017529cf9c8e2404daab" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.630511 4814 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4bc8ccea31ee2269298ee62a4a2e7fcf6522a045d637017529cf9c8e2404daab"} err="failed to get container status \"4bc8ccea31ee2269298ee62a4a2e7fcf6522a045d637017529cf9c8e2404daab\": rpc error: code = NotFound desc = could not find container \"4bc8ccea31ee2269298ee62a4a2e7fcf6522a045d637017529cf9c8e2404daab\": container with ID starting with 4bc8ccea31ee2269298ee62a4a2e7fcf6522a045d637017529cf9c8e2404daab not found: ID does not exist" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.642349 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5879c1b7-4967-4059-9443-7821ccc92d59-catalog-content\") pod \"5879c1b7-4967-4059-9443-7821ccc92d59\" (UID: \"5879c1b7-4967-4059-9443-7821ccc92d59\") " Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.642400 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zb9l\" (UniqueName: \"kubernetes.io/projected/f878b86f-db68-4fe8-ae1d-74a9edd15eff-kube-api-access-5zb9l\") pod \"f878b86f-db68-4fe8-ae1d-74a9edd15eff\" (UID: \"f878b86f-db68-4fe8-ae1d-74a9edd15eff\") " Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.642431 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5879c1b7-4967-4059-9443-7821ccc92d59-utilities\") pod \"5879c1b7-4967-4059-9443-7821ccc92d59\" (UID: \"5879c1b7-4967-4059-9443-7821ccc92d59\") " Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.642460 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4qxg\" (UniqueName: \"kubernetes.io/projected/5879c1b7-4967-4059-9443-7821ccc92d59-kube-api-access-m4qxg\") pod \"5879c1b7-4967-4059-9443-7821ccc92d59\" (UID: \"5879c1b7-4967-4059-9443-7821ccc92d59\") " Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.642580 4814 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f878b86f-db68-4fe8-ae1d-74a9edd15eff-utilities\") pod \"f878b86f-db68-4fe8-ae1d-74a9edd15eff\" (UID: \"f878b86f-db68-4fe8-ae1d-74a9edd15eff\") " Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.642615 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f878b86f-db68-4fe8-ae1d-74a9edd15eff-catalog-content\") pod \"f878b86f-db68-4fe8-ae1d-74a9edd15eff\" (UID: \"f878b86f-db68-4fe8-ae1d-74a9edd15eff\") " Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.643194 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5879c1b7-4967-4059-9443-7821ccc92d59-utilities" (OuterVolumeSpecName: "utilities") pod "5879c1b7-4967-4059-9443-7821ccc92d59" (UID: "5879c1b7-4967-4059-9443-7821ccc92d59"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.643651 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f878b86f-db68-4fe8-ae1d-74a9edd15eff-utilities" (OuterVolumeSpecName: "utilities") pod "f878b86f-db68-4fe8-ae1d-74a9edd15eff" (UID: "f878b86f-db68-4fe8-ae1d-74a9edd15eff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.646775 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5879c1b7-4967-4059-9443-7821ccc92d59-kube-api-access-m4qxg" (OuterVolumeSpecName: "kube-api-access-m4qxg") pod "5879c1b7-4967-4059-9443-7821ccc92d59" (UID: "5879c1b7-4967-4059-9443-7821ccc92d59"). InnerVolumeSpecName "kube-api-access-m4qxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.647635 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f878b86f-db68-4fe8-ae1d-74a9edd15eff-kube-api-access-5zb9l" (OuterVolumeSpecName: "kube-api-access-5zb9l") pod "f878b86f-db68-4fe8-ae1d-74a9edd15eff" (UID: "f878b86f-db68-4fe8-ae1d-74a9edd15eff"). InnerVolumeSpecName "kube-api-access-5zb9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.712350 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5879c1b7-4967-4059-9443-7821ccc92d59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5879c1b7-4967-4059-9443-7821ccc92d59" (UID: "5879c1b7-4967-4059-9443-7821ccc92d59"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.713714 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f878b86f-db68-4fe8-ae1d-74a9edd15eff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f878b86f-db68-4fe8-ae1d-74a9edd15eff" (UID: "f878b86f-db68-4fe8-ae1d-74a9edd15eff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.744498 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f878b86f-db68-4fe8-ae1d-74a9edd15eff-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.744540 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f878b86f-db68-4fe8-ae1d-74a9edd15eff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.744553 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zb9l\" (UniqueName: \"kubernetes.io/projected/f878b86f-db68-4fe8-ae1d-74a9edd15eff-kube-api-access-5zb9l\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.744564 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5879c1b7-4967-4059-9443-7821ccc92d59-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.744572 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5879c1b7-4967-4059-9443-7821ccc92d59-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.744585 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4qxg\" (UniqueName: \"kubernetes.io/projected/5879c1b7-4967-4059-9443-7821ccc92d59-kube-api-access-m4qxg\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.843754 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5pw69"] Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.848284 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-5pw69"] Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.931222 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 02 14:32:13 crc kubenswrapper[4814]: E1202 14:32:13.931843 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f878b86f-db68-4fe8-ae1d-74a9edd15eff" containerName="registry-server" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.931955 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f878b86f-db68-4fe8-ae1d-74a9edd15eff" containerName="registry-server" Dec 02 14:32:13 crc kubenswrapper[4814]: E1202 14:32:13.932023 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5879c1b7-4967-4059-9443-7821ccc92d59" containerName="extract-content" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.932097 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="5879c1b7-4967-4059-9443-7821ccc92d59" containerName="extract-content" Dec 02 14:32:13 crc kubenswrapper[4814]: E1202 14:32:13.932174 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5879c1b7-4967-4059-9443-7821ccc92d59" containerName="extract-utilities" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.932249 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="5879c1b7-4967-4059-9443-7821ccc92d59" containerName="extract-utilities" Dec 02 14:32:13 crc kubenswrapper[4814]: E1202 14:32:13.933582 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="803495b1-7f29-420d-8465-ede6fc4dd66d" containerName="pruner" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.933669 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="803495b1-7f29-420d-8465-ede6fc4dd66d" containerName="pruner" Dec 02 14:32:13 crc kubenswrapper[4814]: E1202 14:32:13.933732 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f878b86f-db68-4fe8-ae1d-74a9edd15eff" containerName="extract-content" 
Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.933796 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f878b86f-db68-4fe8-ae1d-74a9edd15eff" containerName="extract-content" Dec 02 14:32:13 crc kubenswrapper[4814]: E1202 14:32:13.933858 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f878b86f-db68-4fe8-ae1d-74a9edd15eff" containerName="extract-utilities" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.933910 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f878b86f-db68-4fe8-ae1d-74a9edd15eff" containerName="extract-utilities" Dec 02 14:32:13 crc kubenswrapper[4814]: E1202 14:32:13.933964 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5879c1b7-4967-4059-9443-7821ccc92d59" containerName="registry-server" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.934037 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="5879c1b7-4967-4059-9443-7821ccc92d59" containerName="registry-server" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.934250 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="803495b1-7f29-420d-8465-ede6fc4dd66d" containerName="pruner" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.934330 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f878b86f-db68-4fe8-ae1d-74a9edd15eff" containerName="registry-server" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.934402 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="5879c1b7-4967-4059-9443-7821ccc92d59" containerName="registry-server" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.934899 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.936977 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.937931 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 02 14:32:13 crc kubenswrapper[4814]: I1202 14:32:13.944119 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 02 14:32:14 crc kubenswrapper[4814]: I1202 14:32:14.049088 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/addefb6b-3fbe-4472-81cf-2e8ade2d4bfd-kubelet-dir\") pod \"installer-9-crc\" (UID: \"addefb6b-3fbe-4472-81cf-2e8ade2d4bfd\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 14:32:14 crc kubenswrapper[4814]: I1202 14:32:14.049145 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/addefb6b-3fbe-4472-81cf-2e8ade2d4bfd-var-lock\") pod \"installer-9-crc\" (UID: \"addefb6b-3fbe-4472-81cf-2e8ade2d4bfd\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 14:32:14 crc kubenswrapper[4814]: I1202 14:32:14.049176 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/addefb6b-3fbe-4472-81cf-2e8ade2d4bfd-kube-api-access\") pod \"installer-9-crc\" (UID: \"addefb6b-3fbe-4472-81cf-2e8ade2d4bfd\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 14:32:14 crc kubenswrapper[4814]: I1202 14:32:14.150221 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/addefb6b-3fbe-4472-81cf-2e8ade2d4bfd-kubelet-dir\") pod \"installer-9-crc\" (UID: \"addefb6b-3fbe-4472-81cf-2e8ade2d4bfd\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 14:32:14 crc kubenswrapper[4814]: I1202 14:32:14.150302 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/addefb6b-3fbe-4472-81cf-2e8ade2d4bfd-var-lock\") pod \"installer-9-crc\" (UID: \"addefb6b-3fbe-4472-81cf-2e8ade2d4bfd\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 14:32:14 crc kubenswrapper[4814]: I1202 14:32:14.150329 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/addefb6b-3fbe-4472-81cf-2e8ade2d4bfd-kube-api-access\") pod \"installer-9-crc\" (UID: \"addefb6b-3fbe-4472-81cf-2e8ade2d4bfd\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 14:32:14 crc kubenswrapper[4814]: I1202 14:32:14.150424 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/addefb6b-3fbe-4472-81cf-2e8ade2d4bfd-var-lock\") pod \"installer-9-crc\" (UID: \"addefb6b-3fbe-4472-81cf-2e8ade2d4bfd\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 14:32:14 crc kubenswrapper[4814]: I1202 14:32:14.150424 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/addefb6b-3fbe-4472-81cf-2e8ade2d4bfd-kubelet-dir\") pod \"installer-9-crc\" (UID: \"addefb6b-3fbe-4472-81cf-2e8ade2d4bfd\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 14:32:14 crc kubenswrapper[4814]: I1202 14:32:14.152258 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5879c1b7-4967-4059-9443-7821ccc92d59" path="/var/lib/kubelet/pods/5879c1b7-4967-4059-9443-7821ccc92d59/volumes" Dec 02 14:32:14 crc kubenswrapper[4814]: I1202 14:32:14.178913 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/addefb6b-3fbe-4472-81cf-2e8ade2d4bfd-kube-api-access\") pod \"installer-9-crc\" (UID: \"addefb6b-3fbe-4472-81cf-2e8ade2d4bfd\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 14:32:14 crc kubenswrapper[4814]: I1202 14:32:14.299872 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 14:32:14 crc kubenswrapper[4814]: I1202 14:32:14.525481 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lvn9c" event={"ID":"f878b86f-db68-4fe8-ae1d-74a9edd15eff","Type":"ContainerDied","Data":"438e3848b9008f02ced1f5bc38290f6fc3103cd931d1a18a9f14b9ef98b14e67"} Dec 02 14:32:14 crc kubenswrapper[4814]: I1202 14:32:14.525963 4814 scope.go:117] "RemoveContainer" containerID="9ab1fb29a9f812e613de25806a347dde553b1d88958978d63731b79131b71a82" Dec 02 14:32:14 crc kubenswrapper[4814]: I1202 14:32:14.526084 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lvn9c" Dec 02 14:32:14 crc kubenswrapper[4814]: I1202 14:32:14.546895 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lvn9c"] Dec 02 14:32:14 crc kubenswrapper[4814]: I1202 14:32:14.550531 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lvn9c"] Dec 02 14:32:14 crc kubenswrapper[4814]: I1202 14:32:14.558486 4814 scope.go:117] "RemoveContainer" containerID="3d4f025b7522a1979647a6a10fc71da2d83251c8597d397f11141911140a2b29" Dec 02 14:32:14 crc kubenswrapper[4814]: I1202 14:32:14.578819 4814 scope.go:117] "RemoveContainer" containerID="d58517598000080edc2b284b0bb4fd3a3660c6ca31b9663f13eb1cff1f760391" Dec 02 14:32:14 crc kubenswrapper[4814]: I1202 14:32:14.709436 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 02 14:32:14 crc kubenswrapper[4814]: W1202 14:32:14.718285 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podaddefb6b_3fbe_4472_81cf_2e8ade2d4bfd.slice/crio-23e878db16961553277cb0cbd5a59c9913ee678dbd48210546b4146e807ddfad WatchSource:0}: Error finding container 23e878db16961553277cb0cbd5a59c9913ee678dbd48210546b4146e807ddfad: Status 404 returned error can't find the container with id 23e878db16961553277cb0cbd5a59c9913ee678dbd48210546b4146e807ddfad Dec 02 14:32:15 crc kubenswrapper[4814]: I1202 14:32:15.537908 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"addefb6b-3fbe-4472-81cf-2e8ade2d4bfd","Type":"ContainerStarted","Data":"23e878db16961553277cb0cbd5a59c9913ee678dbd48210546b4146e807ddfad"} Dec 02 14:32:16 crc kubenswrapper[4814]: I1202 14:32:16.153556 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f878b86f-db68-4fe8-ae1d-74a9edd15eff" 
path="/var/lib/kubelet/pods/f878b86f-db68-4fe8-ae1d-74a9edd15eff/volumes" Dec 02 14:32:16 crc kubenswrapper[4814]: I1202 14:32:16.544906 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"addefb6b-3fbe-4472-81cf-2e8ade2d4bfd","Type":"ContainerStarted","Data":"b97aa824654f086e6eee1a963ac25c24c0adbeacac51a8d6a2e2bae5ddc4ca83"} Dec 02 14:32:16 crc kubenswrapper[4814]: I1202 14:32:16.559457 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.559435612 podStartE2EDuration="3.559435612s" podCreationTimestamp="2025-12-02 14:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:32:16.558515488 +0000 UTC m=+198.682877692" watchObservedRunningTime="2025-12-02 14:32:16.559435612 +0000 UTC m=+198.683797826" Dec 02 14:32:16 crc kubenswrapper[4814]: I1202 14:32:16.789193 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bc9fk"] Dec 02 14:32:16 crc kubenswrapper[4814]: I1202 14:32:16.789477 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bc9fk" podUID="02b51452-43ab-409e-b220-9032343f5f9c" containerName="registry-server" containerID="cri-o://711f6dfe495fd127ec3a0e07c3f30c24a650a978da7d798d55994a7f801c53e6" gracePeriod=2 Dec 02 14:32:17 crc kubenswrapper[4814]: I1202 14:32:17.553174 4814 generic.go:334] "Generic (PLEG): container finished" podID="02b51452-43ab-409e-b220-9032343f5f9c" containerID="711f6dfe495fd127ec3a0e07c3f30c24a650a978da7d798d55994a7f801c53e6" exitCode=0 Dec 02 14:32:17 crc kubenswrapper[4814]: I1202 14:32:17.553250 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bc9fk" 
event={"ID":"02b51452-43ab-409e-b220-9032343f5f9c","Type":"ContainerDied","Data":"711f6dfe495fd127ec3a0e07c3f30c24a650a978da7d798d55994a7f801c53e6"} Dec 02 14:32:18 crc kubenswrapper[4814]: I1202 14:32:18.297419 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bc9fk" Dec 02 14:32:18 crc kubenswrapper[4814]: I1202 14:32:18.421580 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b51452-43ab-409e-b220-9032343f5f9c-catalog-content\") pod \"02b51452-43ab-409e-b220-9032343f5f9c\" (UID: \"02b51452-43ab-409e-b220-9032343f5f9c\") " Dec 02 14:32:18 crc kubenswrapper[4814]: I1202 14:32:18.421717 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfmmw\" (UniqueName: \"kubernetes.io/projected/02b51452-43ab-409e-b220-9032343f5f9c-kube-api-access-cfmmw\") pod \"02b51452-43ab-409e-b220-9032343f5f9c\" (UID: \"02b51452-43ab-409e-b220-9032343f5f9c\") " Dec 02 14:32:18 crc kubenswrapper[4814]: I1202 14:32:18.421762 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b51452-43ab-409e-b220-9032343f5f9c-utilities\") pod \"02b51452-43ab-409e-b220-9032343f5f9c\" (UID: \"02b51452-43ab-409e-b220-9032343f5f9c\") " Dec 02 14:32:18 crc kubenswrapper[4814]: I1202 14:32:18.422988 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02b51452-43ab-409e-b220-9032343f5f9c-utilities" (OuterVolumeSpecName: "utilities") pod "02b51452-43ab-409e-b220-9032343f5f9c" (UID: "02b51452-43ab-409e-b220-9032343f5f9c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:32:18 crc kubenswrapper[4814]: I1202 14:32:18.432617 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b51452-43ab-409e-b220-9032343f5f9c-kube-api-access-cfmmw" (OuterVolumeSpecName: "kube-api-access-cfmmw") pod "02b51452-43ab-409e-b220-9032343f5f9c" (UID: "02b51452-43ab-409e-b220-9032343f5f9c"). InnerVolumeSpecName "kube-api-access-cfmmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:32:18 crc kubenswrapper[4814]: I1202 14:32:18.524038 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfmmw\" (UniqueName: \"kubernetes.io/projected/02b51452-43ab-409e-b220-9032343f5f9c-kube-api-access-cfmmw\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:18 crc kubenswrapper[4814]: I1202 14:32:18.524105 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b51452-43ab-409e-b220-9032343f5f9c-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:18 crc kubenswrapper[4814]: I1202 14:32:18.530774 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02b51452-43ab-409e-b220-9032343f5f9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02b51452-43ab-409e-b220-9032343f5f9c" (UID: "02b51452-43ab-409e-b220-9032343f5f9c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:32:18 crc kubenswrapper[4814]: I1202 14:32:18.564183 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bc9fk" event={"ID":"02b51452-43ab-409e-b220-9032343f5f9c","Type":"ContainerDied","Data":"1129a5eea10591e354c430c908e1cf49ee85a65b83832d0cf201182ab9d47a98"} Dec 02 14:32:18 crc kubenswrapper[4814]: I1202 14:32:18.564305 4814 scope.go:117] "RemoveContainer" containerID="711f6dfe495fd127ec3a0e07c3f30c24a650a978da7d798d55994a7f801c53e6" Dec 02 14:32:18 crc kubenswrapper[4814]: I1202 14:32:18.565038 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bc9fk" Dec 02 14:32:18 crc kubenswrapper[4814]: I1202 14:32:18.583871 4814 scope.go:117] "RemoveContainer" containerID="a055e6c84b42602bbad22ba8a040f7806f317bfa6082dd3e5fe0cab3bd88a1aa" Dec 02 14:32:18 crc kubenswrapper[4814]: I1202 14:32:18.602732 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bc9fk"] Dec 02 14:32:18 crc kubenswrapper[4814]: I1202 14:32:18.607335 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bc9fk"] Dec 02 14:32:18 crc kubenswrapper[4814]: I1202 14:32:18.620714 4814 scope.go:117] "RemoveContainer" containerID="5977e35d43b2ddce9bf7b0e4e00f7debf61f31adc4896551c9480f2d9be03461" Dec 02 14:32:18 crc kubenswrapper[4814]: I1202 14:32:18.625593 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b51452-43ab-409e-b220-9032343f5f9c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:19 crc kubenswrapper[4814]: I1202 14:32:19.573819 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v44dt" 
event={"ID":"70239f99-6b21-408d-9506-f1b9afbafd90","Type":"ContainerStarted","Data":"f67386d6142dbeed5bfd0d0ba5462c2f4ebacf4839d79a79272dc10ddae79b7e"} Dec 02 14:32:20 crc kubenswrapper[4814]: I1202 14:32:20.155999 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02b51452-43ab-409e-b220-9032343f5f9c" path="/var/lib/kubelet/pods/02b51452-43ab-409e-b220-9032343f5f9c/volumes" Dec 02 14:32:20 crc kubenswrapper[4814]: I1202 14:32:20.583192 4814 generic.go:334] "Generic (PLEG): container finished" podID="70239f99-6b21-408d-9506-f1b9afbafd90" containerID="f67386d6142dbeed5bfd0d0ba5462c2f4ebacf4839d79a79272dc10ddae79b7e" exitCode=0 Dec 02 14:32:20 crc kubenswrapper[4814]: I1202 14:32:20.583275 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v44dt" event={"ID":"70239f99-6b21-408d-9506-f1b9afbafd90","Type":"ContainerDied","Data":"f67386d6142dbeed5bfd0d0ba5462c2f4ebacf4839d79a79272dc10ddae79b7e"} Dec 02 14:32:20 crc kubenswrapper[4814]: I1202 14:32:20.585804 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xfzc" event={"ID":"f714bdcc-9476-4ed4-a164-5e6e5657e0a3","Type":"ContainerStarted","Data":"31e15d99dd35e34e2d0dee7415fb391571dff637269f066be76a4f4df483c30e"} Dec 02 14:32:20 crc kubenswrapper[4814]: I1202 14:32:20.651380 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:32:20 crc kubenswrapper[4814]: I1202 14:32:20.651470 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:32:21 crc kubenswrapper[4814]: I1202 14:32:21.593462 4814 generic.go:334] "Generic (PLEG): container finished" podID="f714bdcc-9476-4ed4-a164-5e6e5657e0a3" containerID="31e15d99dd35e34e2d0dee7415fb391571dff637269f066be76a4f4df483c30e" exitCode=0 Dec 02 14:32:21 crc kubenswrapper[4814]: I1202 14:32:21.593522 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xfzc" event={"ID":"f714bdcc-9476-4ed4-a164-5e6e5657e0a3","Type":"ContainerDied","Data":"31e15d99dd35e34e2d0dee7415fb391571dff637269f066be76a4f4df483c30e"} Dec 02 14:32:23 crc kubenswrapper[4814]: I1202 14:32:23.607772 4814 generic.go:334] "Generic (PLEG): container finished" podID="4388da57-5087-4bb1-8ceb-55603a4ddf9b" containerID="259c79ad3148f5d6bf0717ec49335f1a9740f257618ef113d5c209693a8b35fd" exitCode=0 Dec 02 14:32:23 crc kubenswrapper[4814]: I1202 14:32:23.607972 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhgkx" event={"ID":"4388da57-5087-4bb1-8ceb-55603a4ddf9b","Type":"ContainerDied","Data":"259c79ad3148f5d6bf0717ec49335f1a9740f257618ef113d5c209693a8b35fd"} Dec 02 14:32:23 crc kubenswrapper[4814]: I1202 14:32:23.611339 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v44dt" event={"ID":"70239f99-6b21-408d-9506-f1b9afbafd90","Type":"ContainerStarted","Data":"58a8fc2af3606a18fc2ab9ffde74a7faaa7b54f9214ffc82a081c72f3a62412f"} Dec 02 14:32:23 crc kubenswrapper[4814]: I1202 14:32:23.621810 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xfzc" event={"ID":"f714bdcc-9476-4ed4-a164-5e6e5657e0a3","Type":"ContainerStarted","Data":"91d53366348b7a917ae0f2b194dcaa320c248fedbfcf1461519582a30e55a82e"} Dec 02 14:32:23 crc kubenswrapper[4814]: I1202 14:32:23.663144 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-v44dt" podStartSLOduration=2.800963495 podStartE2EDuration="52.663115965s" podCreationTimestamp="2025-12-02 14:31:31 +0000 UTC" firstStartedPulling="2025-12-02 14:31:33.306608234 +0000 UTC m=+155.430970448" lastFinishedPulling="2025-12-02 14:32:23.168760714 +0000 UTC m=+205.293122918" observedRunningTime="2025-12-02 14:32:23.662001516 +0000 UTC m=+205.786363760" watchObservedRunningTime="2025-12-02 14:32:23.663115965 +0000 UTC m=+205.787478169" Dec 02 14:32:23 crc kubenswrapper[4814]: I1202 14:32:23.687081 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7xfzc" podStartSLOduration=3.676135183 podStartE2EDuration="55.687059643s" podCreationTimestamp="2025-12-02 14:31:28 +0000 UTC" firstStartedPulling="2025-12-02 14:31:31.113507161 +0000 UTC m=+153.237869365" lastFinishedPulling="2025-12-02 14:32:23.124431621 +0000 UTC m=+205.248793825" observedRunningTime="2025-12-02 14:32:23.684025845 +0000 UTC m=+205.808388049" watchObservedRunningTime="2025-12-02 14:32:23.687059643 +0000 UTC m=+205.811421857" Dec 02 14:32:24 crc kubenswrapper[4814]: I1202 14:32:24.629879 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhgkx" event={"ID":"4388da57-5087-4bb1-8ceb-55603a4ddf9b","Type":"ContainerStarted","Data":"f583378f006f922e34e2c6d74895318e98ed4c25b29ced73179408bbca3c5b31"} Dec 02 14:32:24 crc kubenswrapper[4814]: I1202 14:32:24.656524 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lhgkx" podStartSLOduration=1.737994397 podStartE2EDuration="53.656490546s" podCreationTimestamp="2025-12-02 14:31:31 +0000 UTC" firstStartedPulling="2025-12-02 14:31:32.260489071 +0000 UTC m=+154.384851275" lastFinishedPulling="2025-12-02 14:32:24.17898522 +0000 UTC m=+206.303347424" observedRunningTime="2025-12-02 14:32:24.652400111 +0000 UTC m=+206.776762405" 
watchObservedRunningTime="2025-12-02 14:32:24.656490546 +0000 UTC m=+206.780852760" Dec 02 14:32:29 crc kubenswrapper[4814]: I1202 14:32:29.160780 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7xfzc" Dec 02 14:32:29 crc kubenswrapper[4814]: I1202 14:32:29.161582 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7xfzc" Dec 02 14:32:29 crc kubenswrapper[4814]: I1202 14:32:29.213986 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7xfzc" Dec 02 14:32:29 crc kubenswrapper[4814]: I1202 14:32:29.734928 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7xfzc" Dec 02 14:32:31 crc kubenswrapper[4814]: I1202 14:32:31.536912 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lhgkx" Dec 02 14:32:31 crc kubenswrapper[4814]: I1202 14:32:31.537482 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lhgkx" Dec 02 14:32:31 crc kubenswrapper[4814]: I1202 14:32:31.611774 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lhgkx" Dec 02 14:32:31 crc kubenswrapper[4814]: I1202 14:32:31.732200 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lhgkx" Dec 02 14:32:31 crc kubenswrapper[4814]: I1202 14:32:31.927955 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" podUID="050c15f4-8e90-4961-bb14-eaa0aa868b8c" containerName="oauth-openshift" containerID="cri-o://e538bdd7d67e6f1a4d48394bcfad569fa18442b89a21874e0cf1d4ba546f0a36" gracePeriod=15 Dec 02 14:32:32 
crc kubenswrapper[4814]: I1202 14:32:32.220858 4814 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-hjjzp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" start-of-body= Dec 02 14:32:32 crc kubenswrapper[4814]: I1202 14:32:32.220941 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" podUID="050c15f4-8e90-4961-bb14-eaa0aa868b8c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" Dec 02 14:32:32 crc kubenswrapper[4814]: I1202 14:32:32.351635 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v44dt" Dec 02 14:32:32 crc kubenswrapper[4814]: I1202 14:32:32.351724 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v44dt" Dec 02 14:32:32 crc kubenswrapper[4814]: I1202 14:32:32.417882 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v44dt" Dec 02 14:32:32 crc kubenswrapper[4814]: I1202 14:32:32.742403 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v44dt" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.575271 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.617004 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz"] Dec 02 14:32:33 crc kubenswrapper[4814]: E1202 14:32:33.617641 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b51452-43ab-409e-b220-9032343f5f9c" containerName="extract-content" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.617707 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b51452-43ab-409e-b220-9032343f5f9c" containerName="extract-content" Dec 02 14:32:33 crc kubenswrapper[4814]: E1202 14:32:33.617732 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050c15f4-8e90-4961-bb14-eaa0aa868b8c" containerName="oauth-openshift" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.617792 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="050c15f4-8e90-4961-bb14-eaa0aa868b8c" containerName="oauth-openshift" Dec 02 14:32:33 crc kubenswrapper[4814]: E1202 14:32:33.617811 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b51452-43ab-409e-b220-9032343f5f9c" containerName="extract-utilities" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.617820 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b51452-43ab-409e-b220-9032343f5f9c" containerName="extract-utilities" Dec 02 14:32:33 crc kubenswrapper[4814]: E1202 14:32:33.617906 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b51452-43ab-409e-b220-9032343f5f9c" containerName="registry-server" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.617942 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b51452-43ab-409e-b220-9032343f5f9c" containerName="registry-server" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.618157 4814 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="02b51452-43ab-409e-b220-9032343f5f9c" containerName="registry-server" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.618207 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="050c15f4-8e90-4961-bb14-eaa0aa868b8c" containerName="oauth-openshift" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.619032 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.642205 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz"] Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.648966 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-template-error\") pod \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.649036 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mflw\" (UniqueName: \"kubernetes.io/projected/050c15f4-8e90-4961-bb14-eaa0aa868b8c-kube-api-access-8mflw\") pod \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.649090 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/050c15f4-8e90-4961-bb14-eaa0aa868b8c-audit-dir\") pod \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.649130 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-serving-cert\") pod \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.649162 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-idp-0-file-data\") pod \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.649254 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-audit-policies\") pod \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.649354 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/050c15f4-8e90-4961-bb14-eaa0aa868b8c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "050c15f4-8e90-4961-bb14-eaa0aa868b8c" (UID: "050c15f4-8e90-4961-bb14-eaa0aa868b8c"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.649809 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-session\") pod \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.649875 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-template-login\") pod \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.649915 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-cliconfig\") pod \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.649999 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-template-provider-selection\") pod \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.650045 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-router-certs\") pod \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " Dec 
02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.650089 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-service-ca\") pod \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.650357 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-ocp-branding-template\") pod \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.650424 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-trusted-ca-bundle\") pod \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\" (UID: \"050c15f4-8e90-4961-bb14-eaa0aa868b8c\") " Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.650831 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "050c15f4-8e90-4961-bb14-eaa0aa868b8c" (UID: "050c15f4-8e90-4961-bb14-eaa0aa868b8c"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.651350 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "050c15f4-8e90-4961-bb14-eaa0aa868b8c" (UID: "050c15f4-8e90-4961-bb14-eaa0aa868b8c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.651898 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "050c15f4-8e90-4961-bb14-eaa0aa868b8c" (UID: "050c15f4-8e90-4961-bb14-eaa0aa868b8c"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.655705 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "050c15f4-8e90-4961-bb14-eaa0aa868b8c" (UID: "050c15f4-8e90-4961-bb14-eaa0aa868b8c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.659406 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050c15f4-8e90-4961-bb14-eaa0aa868b8c-kube-api-access-8mflw" (OuterVolumeSpecName: "kube-api-access-8mflw") pod "050c15f4-8e90-4961-bb14-eaa0aa868b8c" (UID: "050c15f4-8e90-4961-bb14-eaa0aa868b8c"). InnerVolumeSpecName "kube-api-access-8mflw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.661003 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "050c15f4-8e90-4961-bb14-eaa0aa868b8c" (UID: "050c15f4-8e90-4961-bb14-eaa0aa868b8c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.661471 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "050c15f4-8e90-4961-bb14-eaa0aa868b8c" (UID: "050c15f4-8e90-4961-bb14-eaa0aa868b8c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.661697 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "050c15f4-8e90-4961-bb14-eaa0aa868b8c" (UID: "050c15f4-8e90-4961-bb14-eaa0aa868b8c"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.661917 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "050c15f4-8e90-4961-bb14-eaa0aa868b8c" (UID: "050c15f4-8e90-4961-bb14-eaa0aa868b8c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.663015 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "050c15f4-8e90-4961-bb14-eaa0aa868b8c" (UID: "050c15f4-8e90-4961-bb14-eaa0aa868b8c"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.671703 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "050c15f4-8e90-4961-bb14-eaa0aa868b8c" (UID: "050c15f4-8e90-4961-bb14-eaa0aa868b8c"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.673826 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "050c15f4-8e90-4961-bb14-eaa0aa868b8c" (UID: "050c15f4-8e90-4961-bb14-eaa0aa868b8c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.677194 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "050c15f4-8e90-4961-bb14-eaa0aa868b8c" (UID: "050c15f4-8e90-4961-bb14-eaa0aa868b8c"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.703763 4814 generic.go:334] "Generic (PLEG): container finished" podID="050c15f4-8e90-4961-bb14-eaa0aa868b8c" containerID="e538bdd7d67e6f1a4d48394bcfad569fa18442b89a21874e0cf1d4ba546f0a36" exitCode=0 Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.704669 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.705091 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" event={"ID":"050c15f4-8e90-4961-bb14-eaa0aa868b8c","Type":"ContainerDied","Data":"e538bdd7d67e6f1a4d48394bcfad569fa18442b89a21874e0cf1d4ba546f0a36"} Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.705127 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hjjzp" event={"ID":"050c15f4-8e90-4961-bb14-eaa0aa868b8c","Type":"ContainerDied","Data":"89a9784bdfb52b87c600d4183fc9a5398ce0e780343d0b7d9f82fa24ccccd78f"} Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.705145 4814 scope.go:117] "RemoveContainer" containerID="e538bdd7d67e6f1a4d48394bcfad569fa18442b89a21874e0cf1d4ba546f0a36" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.729866 4814 scope.go:117] "RemoveContainer" containerID="e538bdd7d67e6f1a4d48394bcfad569fa18442b89a21874e0cf1d4ba546f0a36" Dec 02 14:32:33 crc kubenswrapper[4814]: E1202 14:32:33.731098 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e538bdd7d67e6f1a4d48394bcfad569fa18442b89a21874e0cf1d4ba546f0a36\": container with ID starting with e538bdd7d67e6f1a4d48394bcfad569fa18442b89a21874e0cf1d4ba546f0a36 not found: ID does not exist" containerID="e538bdd7d67e6f1a4d48394bcfad569fa18442b89a21874e0cf1d4ba546f0a36" 
Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.731153 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e538bdd7d67e6f1a4d48394bcfad569fa18442b89a21874e0cf1d4ba546f0a36"} err="failed to get container status \"e538bdd7d67e6f1a4d48394bcfad569fa18442b89a21874e0cf1d4ba546f0a36\": rpc error: code = NotFound desc = could not find container \"e538bdd7d67e6f1a4d48394bcfad569fa18442b89a21874e0cf1d4ba546f0a36\": container with ID starting with e538bdd7d67e6f1a4d48394bcfad569fa18442b89a21874e0cf1d4ba546f0a36 not found: ID does not exist" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.732355 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hjjzp"] Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.742559 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hjjzp"] Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752126 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-session\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752176 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752207 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-user-template-login\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752273 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-router-certs\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752307 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752337 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-user-template-error\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752372 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/9ac97874-7713-4408-a805-d12d818623b2-audit-policies\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752403 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mgfz\" (UniqueName: \"kubernetes.io/projected/9ac97874-7713-4408-a805-d12d818623b2-kube-api-access-6mgfz\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752423 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-service-ca\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752451 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752475 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: 
\"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752493 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752524 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9ac97874-7713-4408-a805-d12d818623b2-audit-dir\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752543 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752583 4814 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752597 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-session\") on 
node \"crc\" DevicePath \"\"" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752608 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752619 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752631 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752644 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752656 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752667 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752678 4814 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752689 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752699 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mflw\" (UniqueName: \"kubernetes.io/projected/050c15f4-8e90-4961-bb14-eaa0aa868b8c-kube-api-access-8mflw\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752710 4814 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/050c15f4-8e90-4961-bb14-eaa0aa868b8c-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752720 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.752730 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/050c15f4-8e90-4961-bb14-eaa0aa868b8c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.791080 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lhgkx"] Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.791527 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lhgkx" 
podUID="4388da57-5087-4bb1-8ceb-55603a4ddf9b" containerName="registry-server" containerID="cri-o://f583378f006f922e34e2c6d74895318e98ed4c25b29ced73179408bbca3c5b31" gracePeriod=2 Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.853828 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.853910 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.853961 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.854028 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9ac97874-7713-4408-a805-d12d818623b2-audit-dir\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.854066 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.854153 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9ac97874-7713-4408-a805-d12d818623b2-audit-dir\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.854590 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-session\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.854655 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.854692 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-user-template-login\") pod 
\"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.854746 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-router-certs\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.854787 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.854831 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-user-template-error\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.854892 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9ac97874-7713-4408-a805-d12d818623b2-audit-policies\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.854959 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6mgfz\" (UniqueName: \"kubernetes.io/projected/9ac97874-7713-4408-a805-d12d818623b2-kube-api-access-6mgfz\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.855005 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-service-ca\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.856616 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.857299 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9ac97874-7713-4408-a805-d12d818623b2-audit-policies\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.857577 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " 
pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.858372 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-service-ca\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.861524 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.862005 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-session\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.862566 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-router-certs\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.864208 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.865327 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-user-template-login\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.866145 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-user-template-error\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.869397 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.872140 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9ac97874-7713-4408-a805-d12d818623b2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " 
pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.884538 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mgfz\" (UniqueName: \"kubernetes.io/projected/9ac97874-7713-4408-a805-d12d818623b2-kube-api-access-6mgfz\") pod \"oauth-openshift-5f7dff4c75-s2vpz\" (UID: \"9ac97874-7713-4408-a805-d12d818623b2\") " pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:33 crc kubenswrapper[4814]: I1202 14:32:33.940681 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.159617 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="050c15f4-8e90-4961-bb14-eaa0aa868b8c" path="/var/lib/kubelet/pods/050c15f4-8e90-4961-bb14-eaa0aa868b8c/volumes" Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.252675 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lhgkx" Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.362262 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4388da57-5087-4bb1-8ceb-55603a4ddf9b-catalog-content\") pod \"4388da57-5087-4bb1-8ceb-55603a4ddf9b\" (UID: \"4388da57-5087-4bb1-8ceb-55603a4ddf9b\") " Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.362334 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8nb5\" (UniqueName: \"kubernetes.io/projected/4388da57-5087-4bb1-8ceb-55603a4ddf9b-kube-api-access-l8nb5\") pod \"4388da57-5087-4bb1-8ceb-55603a4ddf9b\" (UID: \"4388da57-5087-4bb1-8ceb-55603a4ddf9b\") " Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.362436 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4388da57-5087-4bb1-8ceb-55603a4ddf9b-utilities\") pod \"4388da57-5087-4bb1-8ceb-55603a4ddf9b\" (UID: \"4388da57-5087-4bb1-8ceb-55603a4ddf9b\") " Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.363704 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4388da57-5087-4bb1-8ceb-55603a4ddf9b-utilities" (OuterVolumeSpecName: "utilities") pod "4388da57-5087-4bb1-8ceb-55603a4ddf9b" (UID: "4388da57-5087-4bb1-8ceb-55603a4ddf9b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.369415 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4388da57-5087-4bb1-8ceb-55603a4ddf9b-kube-api-access-l8nb5" (OuterVolumeSpecName: "kube-api-access-l8nb5") pod "4388da57-5087-4bb1-8ceb-55603a4ddf9b" (UID: "4388da57-5087-4bb1-8ceb-55603a4ddf9b"). InnerVolumeSpecName "kube-api-access-l8nb5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.379629 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4388da57-5087-4bb1-8ceb-55603a4ddf9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4388da57-5087-4bb1-8ceb-55603a4ddf9b" (UID: "4388da57-5087-4bb1-8ceb-55603a4ddf9b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.464151 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4388da57-5087-4bb1-8ceb-55603a4ddf9b-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.464202 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4388da57-5087-4bb1-8ceb-55603a4ddf9b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.464221 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8nb5\" (UniqueName: \"kubernetes.io/projected/4388da57-5087-4bb1-8ceb-55603a4ddf9b-kube-api-access-l8nb5\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.481910 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz"] Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.712609 4814 generic.go:334] "Generic (PLEG): container finished" podID="4388da57-5087-4bb1-8ceb-55603a4ddf9b" containerID="f583378f006f922e34e2c6d74895318e98ed4c25b29ced73179408bbca3c5b31" exitCode=0 Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.712729 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lhgkx" Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.712723 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhgkx" event={"ID":"4388da57-5087-4bb1-8ceb-55603a4ddf9b","Type":"ContainerDied","Data":"f583378f006f922e34e2c6d74895318e98ed4c25b29ced73179408bbca3c5b31"} Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.713340 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhgkx" event={"ID":"4388da57-5087-4bb1-8ceb-55603a4ddf9b","Type":"ContainerDied","Data":"b396a80e88638ce5000eb8463ed121b1fecd6603aa18de196ec858b64dd89be1"} Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.713391 4814 scope.go:117] "RemoveContainer" containerID="f583378f006f922e34e2c6d74895318e98ed4c25b29ced73179408bbca3c5b31" Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.716765 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" event={"ID":"9ac97874-7713-4408-a805-d12d818623b2","Type":"ContainerStarted","Data":"320e0fb025d957b5a7cbebf90131f021f9aae7be05cb38d2c0ee69e92bab254a"} Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.739030 4814 scope.go:117] "RemoveContainer" containerID="259c79ad3148f5d6bf0717ec49335f1a9740f257618ef113d5c209693a8b35fd" Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.758515 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lhgkx"] Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.766305 4814 scope.go:117] "RemoveContainer" containerID="92d1f9902eecdac1037c7bbd912a6182aba1cbfff44d3521f63216752b6b07f9" Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.769415 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lhgkx"] Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.798767 
4814 scope.go:117] "RemoveContainer" containerID="f583378f006f922e34e2c6d74895318e98ed4c25b29ced73179408bbca3c5b31" Dec 02 14:32:34 crc kubenswrapper[4814]: E1202 14:32:34.800313 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f583378f006f922e34e2c6d74895318e98ed4c25b29ced73179408bbca3c5b31\": container with ID starting with f583378f006f922e34e2c6d74895318e98ed4c25b29ced73179408bbca3c5b31 not found: ID does not exist" containerID="f583378f006f922e34e2c6d74895318e98ed4c25b29ced73179408bbca3c5b31" Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.800382 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f583378f006f922e34e2c6d74895318e98ed4c25b29ced73179408bbca3c5b31"} err="failed to get container status \"f583378f006f922e34e2c6d74895318e98ed4c25b29ced73179408bbca3c5b31\": rpc error: code = NotFound desc = could not find container \"f583378f006f922e34e2c6d74895318e98ed4c25b29ced73179408bbca3c5b31\": container with ID starting with f583378f006f922e34e2c6d74895318e98ed4c25b29ced73179408bbca3c5b31 not found: ID does not exist" Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.800417 4814 scope.go:117] "RemoveContainer" containerID="259c79ad3148f5d6bf0717ec49335f1a9740f257618ef113d5c209693a8b35fd" Dec 02 14:32:34 crc kubenswrapper[4814]: E1202 14:32:34.801146 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"259c79ad3148f5d6bf0717ec49335f1a9740f257618ef113d5c209693a8b35fd\": container with ID starting with 259c79ad3148f5d6bf0717ec49335f1a9740f257618ef113d5c209693a8b35fd not found: ID does not exist" containerID="259c79ad3148f5d6bf0717ec49335f1a9740f257618ef113d5c209693a8b35fd" Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.801263 4814 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"259c79ad3148f5d6bf0717ec49335f1a9740f257618ef113d5c209693a8b35fd"} err="failed to get container status \"259c79ad3148f5d6bf0717ec49335f1a9740f257618ef113d5c209693a8b35fd\": rpc error: code = NotFound desc = could not find container \"259c79ad3148f5d6bf0717ec49335f1a9740f257618ef113d5c209693a8b35fd\": container with ID starting with 259c79ad3148f5d6bf0717ec49335f1a9740f257618ef113d5c209693a8b35fd not found: ID does not exist" Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.801307 4814 scope.go:117] "RemoveContainer" containerID="92d1f9902eecdac1037c7bbd912a6182aba1cbfff44d3521f63216752b6b07f9" Dec 02 14:32:34 crc kubenswrapper[4814]: E1202 14:32:34.801704 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92d1f9902eecdac1037c7bbd912a6182aba1cbfff44d3521f63216752b6b07f9\": container with ID starting with 92d1f9902eecdac1037c7bbd912a6182aba1cbfff44d3521f63216752b6b07f9 not found: ID does not exist" containerID="92d1f9902eecdac1037c7bbd912a6182aba1cbfff44d3521f63216752b6b07f9" Dec 02 14:32:34 crc kubenswrapper[4814]: I1202 14:32:34.801779 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92d1f9902eecdac1037c7bbd912a6182aba1cbfff44d3521f63216752b6b07f9"} err="failed to get container status \"92d1f9902eecdac1037c7bbd912a6182aba1cbfff44d3521f63216752b6b07f9\": rpc error: code = NotFound desc = could not find container \"92d1f9902eecdac1037c7bbd912a6182aba1cbfff44d3521f63216752b6b07f9\": container with ID starting with 92d1f9902eecdac1037c7bbd912a6182aba1cbfff44d3521f63216752b6b07f9 not found: ID does not exist" Dec 02 14:32:35 crc kubenswrapper[4814]: I1202 14:32:35.729364 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" 
event={"ID":"9ac97874-7713-4408-a805-d12d818623b2","Type":"ContainerStarted","Data":"bc509f0151f865dfcee0b909b9e49e0ccdd0ab7726d3ea510d1be7b7cad49cc5"} Dec 02 14:32:35 crc kubenswrapper[4814]: I1202 14:32:35.729715 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:35 crc kubenswrapper[4814]: I1202 14:32:35.739358 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" Dec 02 14:32:35 crc kubenswrapper[4814]: I1202 14:32:35.810769 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5f7dff4c75-s2vpz" podStartSLOduration=29.810730495 podStartE2EDuration="29.810730495s" podCreationTimestamp="2025-12-02 14:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:32:35.76943001 +0000 UTC m=+217.893792304" watchObservedRunningTime="2025-12-02 14:32:35.810730495 +0000 UTC m=+217.935092749" Dec 02 14:32:36 crc kubenswrapper[4814]: I1202 14:32:36.155205 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4388da57-5087-4bb1-8ceb-55603a4ddf9b" path="/var/lib/kubelet/pods/4388da57-5087-4bb1-8ceb-55603a4ddf9b/volumes" Dec 02 14:32:46 crc kubenswrapper[4814]: I1202 14:32:46.837509 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7xfzc"] Dec 02 14:32:46 crc kubenswrapper[4814]: I1202 14:32:46.841581 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sxpmb"] Dec 02 14:32:46 crc kubenswrapper[4814]: I1202 14:32:46.841895 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7xfzc" podUID="f714bdcc-9476-4ed4-a164-5e6e5657e0a3" 
containerName="registry-server" containerID="cri-o://91d53366348b7a917ae0f2b194dcaa320c248fedbfcf1461519582a30e55a82e" gracePeriod=30 Dec 02 14:32:46 crc kubenswrapper[4814]: I1202 14:32:46.842483 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sxpmb" podUID="638fc62a-4504-4762-bbaa-e7a6a6dfbca0" containerName="registry-server" containerID="cri-o://a7e741870bc8a19bb82ad85146755d63cbd02b80121c09df3e5bf873f3ca520e" gracePeriod=30 Dec 02 14:32:46 crc kubenswrapper[4814]: I1202 14:32:46.868835 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fnp2w"] Dec 02 14:32:46 crc kubenswrapper[4814]: I1202 14:32:46.869137 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" podUID="a6316d49-679c-4393-bae7-5ca27876cce4" containerName="marketplace-operator" containerID="cri-o://dc0dc7b0fe7d03a9f2adadd72bf8952d65e89a36a3d3c8946a81b134899f97ae" gracePeriod=30 Dec 02 14:32:46 crc kubenswrapper[4814]: I1202 14:32:46.878448 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcv28"] Dec 02 14:32:46 crc kubenswrapper[4814]: I1202 14:32:46.878798 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lcv28" podUID="4d9788e9-39e1-4f60-be8a-891c0c224019" containerName="registry-server" containerID="cri-o://4bae04dc07a69137e551dc9e56ce95afcd64323a208371e6eeb46876a2aeb954" gracePeriod=30 Dec 02 14:32:46 crc kubenswrapper[4814]: I1202 14:32:46.882190 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v44dt"] Dec 02 14:32:46 crc kubenswrapper[4814]: I1202 14:32:46.882530 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v44dt" 
podUID="70239f99-6b21-408d-9506-f1b9afbafd90" containerName="registry-server" containerID="cri-o://58a8fc2af3606a18fc2ab9ffde74a7faaa7b54f9214ffc82a081c72f3a62412f" gracePeriod=30 Dec 02 14:32:46 crc kubenswrapper[4814]: I1202 14:32:46.891159 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wmlq8"] Dec 02 14:32:46 crc kubenswrapper[4814]: E1202 14:32:46.891437 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4388da57-5087-4bb1-8ceb-55603a4ddf9b" containerName="registry-server" Dec 02 14:32:46 crc kubenswrapper[4814]: I1202 14:32:46.891452 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="4388da57-5087-4bb1-8ceb-55603a4ddf9b" containerName="registry-server" Dec 02 14:32:46 crc kubenswrapper[4814]: E1202 14:32:46.891473 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4388da57-5087-4bb1-8ceb-55603a4ddf9b" containerName="extract-content" Dec 02 14:32:46 crc kubenswrapper[4814]: I1202 14:32:46.891482 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="4388da57-5087-4bb1-8ceb-55603a4ddf9b" containerName="extract-content" Dec 02 14:32:46 crc kubenswrapper[4814]: E1202 14:32:46.891495 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4388da57-5087-4bb1-8ceb-55603a4ddf9b" containerName="extract-utilities" Dec 02 14:32:46 crc kubenswrapper[4814]: I1202 14:32:46.891503 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="4388da57-5087-4bb1-8ceb-55603a4ddf9b" containerName="extract-utilities" Dec 02 14:32:46 crc kubenswrapper[4814]: I1202 14:32:46.891595 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="4388da57-5087-4bb1-8ceb-55603a4ddf9b" containerName="registry-server" Dec 02 14:32:46 crc kubenswrapper[4814]: I1202 14:32:46.892525 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wmlq8" Dec 02 14:32:46 crc kubenswrapper[4814]: I1202 14:32:46.959523 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wmlq8"] Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.051105 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd894607-26ea-4312-98b1-99c3cdcd522a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wmlq8\" (UID: \"cd894607-26ea-4312-98b1-99c3cdcd522a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wmlq8" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.051188 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx7zp\" (UniqueName: \"kubernetes.io/projected/cd894607-26ea-4312-98b1-99c3cdcd522a-kube-api-access-tx7zp\") pod \"marketplace-operator-79b997595-wmlq8\" (UID: \"cd894607-26ea-4312-98b1-99c3cdcd522a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wmlq8" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.051210 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cd894607-26ea-4312-98b1-99c3cdcd522a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wmlq8\" (UID: \"cd894607-26ea-4312-98b1-99c3cdcd522a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wmlq8" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.154761 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd894607-26ea-4312-98b1-99c3cdcd522a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wmlq8\" (UID: 
\"cd894607-26ea-4312-98b1-99c3cdcd522a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wmlq8" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.154867 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx7zp\" (UniqueName: \"kubernetes.io/projected/cd894607-26ea-4312-98b1-99c3cdcd522a-kube-api-access-tx7zp\") pod \"marketplace-operator-79b997595-wmlq8\" (UID: \"cd894607-26ea-4312-98b1-99c3cdcd522a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wmlq8" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.154887 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cd894607-26ea-4312-98b1-99c3cdcd522a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wmlq8\" (UID: \"cd894607-26ea-4312-98b1-99c3cdcd522a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wmlq8" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.157005 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd894607-26ea-4312-98b1-99c3cdcd522a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wmlq8\" (UID: \"cd894607-26ea-4312-98b1-99c3cdcd522a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wmlq8" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.162861 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cd894607-26ea-4312-98b1-99c3cdcd522a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wmlq8\" (UID: \"cd894607-26ea-4312-98b1-99c3cdcd522a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wmlq8" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.177434 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tx7zp\" (UniqueName: \"kubernetes.io/projected/cd894607-26ea-4312-98b1-99c3cdcd522a-kube-api-access-tx7zp\") pod \"marketplace-operator-79b997595-wmlq8\" (UID: \"cd894607-26ea-4312-98b1-99c3cdcd522a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wmlq8" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.215925 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wmlq8" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.356878 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.432757 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sxpmb" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.437374 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v44dt" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.441910 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcv28" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.461668 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq9zn\" (UniqueName: \"kubernetes.io/projected/a6316d49-679c-4393-bae7-5ca27876cce4-kube-api-access-wq9zn\") pod \"a6316d49-679c-4393-bae7-5ca27876cce4\" (UID: \"a6316d49-679c-4393-bae7-5ca27876cce4\") " Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.462004 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6316d49-679c-4393-bae7-5ca27876cce4-marketplace-trusted-ca\") pod \"a6316d49-679c-4393-bae7-5ca27876cce4\" (UID: \"a6316d49-679c-4393-bae7-5ca27876cce4\") " Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.462127 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a6316d49-679c-4393-bae7-5ca27876cce4-marketplace-operator-metrics\") pod \"a6316d49-679c-4393-bae7-5ca27876cce4\" (UID: \"a6316d49-679c-4393-bae7-5ca27876cce4\") " Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.462854 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6316d49-679c-4393-bae7-5ca27876cce4-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "a6316d49-679c-4393-bae7-5ca27876cce4" (UID: "a6316d49-679c-4393-bae7-5ca27876cce4"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.468166 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6316d49-679c-4393-bae7-5ca27876cce4-kube-api-access-wq9zn" (OuterVolumeSpecName: "kube-api-access-wq9zn") pod "a6316d49-679c-4393-bae7-5ca27876cce4" (UID: "a6316d49-679c-4393-bae7-5ca27876cce4"). InnerVolumeSpecName "kube-api-access-wq9zn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.468203 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6316d49-679c-4393-bae7-5ca27876cce4-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "a6316d49-679c-4393-bae7-5ca27876cce4" (UID: "a6316d49-679c-4393-bae7-5ca27876cce4"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.563486 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70239f99-6b21-408d-9506-f1b9afbafd90-catalog-content\") pod \"70239f99-6b21-408d-9506-f1b9afbafd90\" (UID: \"70239f99-6b21-408d-9506-f1b9afbafd90\") " Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.563573 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpsmf\" (UniqueName: \"kubernetes.io/projected/638fc62a-4504-4762-bbaa-e7a6a6dfbca0-kube-api-access-bpsmf\") pod \"638fc62a-4504-4762-bbaa-e7a6a6dfbca0\" (UID: \"638fc62a-4504-4762-bbaa-e7a6a6dfbca0\") " Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.563622 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70239f99-6b21-408d-9506-f1b9afbafd90-utilities\") pod 
\"70239f99-6b21-408d-9506-f1b9afbafd90\" (UID: \"70239f99-6b21-408d-9506-f1b9afbafd90\") " Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.563652 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcqpp\" (UniqueName: \"kubernetes.io/projected/70239f99-6b21-408d-9506-f1b9afbafd90-kube-api-access-mcqpp\") pod \"70239f99-6b21-408d-9506-f1b9afbafd90\" (UID: \"70239f99-6b21-408d-9506-f1b9afbafd90\") " Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.563690 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9788e9-39e1-4f60-be8a-891c0c224019-catalog-content\") pod \"4d9788e9-39e1-4f60-be8a-891c0c224019\" (UID: \"4d9788e9-39e1-4f60-be8a-891c0c224019\") " Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.563772 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g5n4\" (UniqueName: \"kubernetes.io/projected/4d9788e9-39e1-4f60-be8a-891c0c224019-kube-api-access-6g5n4\") pod \"4d9788e9-39e1-4f60-be8a-891c0c224019\" (UID: \"4d9788e9-39e1-4f60-be8a-891c0c224019\") " Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.563803 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/638fc62a-4504-4762-bbaa-e7a6a6dfbca0-utilities\") pod \"638fc62a-4504-4762-bbaa-e7a6a6dfbca0\" (UID: \"638fc62a-4504-4762-bbaa-e7a6a6dfbca0\") " Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.564147 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9788e9-39e1-4f60-be8a-891c0c224019-utilities\") pod \"4d9788e9-39e1-4f60-be8a-891c0c224019\" (UID: \"4d9788e9-39e1-4f60-be8a-891c0c224019\") " Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.564175 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/638fc62a-4504-4762-bbaa-e7a6a6dfbca0-catalog-content\") pod \"638fc62a-4504-4762-bbaa-e7a6a6dfbca0\" (UID: \"638fc62a-4504-4762-bbaa-e7a6a6dfbca0\") " Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.564778 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70239f99-6b21-408d-9506-f1b9afbafd90-utilities" (OuterVolumeSpecName: "utilities") pod "70239f99-6b21-408d-9506-f1b9afbafd90" (UID: "70239f99-6b21-408d-9506-f1b9afbafd90"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.566188 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/638fc62a-4504-4762-bbaa-e7a6a6dfbca0-utilities" (OuterVolumeSpecName: "utilities") pod "638fc62a-4504-4762-bbaa-e7a6a6dfbca0" (UID: "638fc62a-4504-4762-bbaa-e7a6a6dfbca0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.566300 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d9788e9-39e1-4f60-be8a-891c0c224019-utilities" (OuterVolumeSpecName: "utilities") pod "4d9788e9-39e1-4f60-be8a-891c0c224019" (UID: "4d9788e9-39e1-4f60-be8a-891c0c224019"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.568406 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d9788e9-39e1-4f60-be8a-891c0c224019-kube-api-access-6g5n4" (OuterVolumeSpecName: "kube-api-access-6g5n4") pod "4d9788e9-39e1-4f60-be8a-891c0c224019" (UID: "4d9788e9-39e1-4f60-be8a-891c0c224019"). InnerVolumeSpecName "kube-api-access-6g5n4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.570604 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g5n4\" (UniqueName: \"kubernetes.io/projected/4d9788e9-39e1-4f60-be8a-891c0c224019-kube-api-access-6g5n4\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.570636 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/638fc62a-4504-4762-bbaa-e7a6a6dfbca0-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.570649 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9788e9-39e1-4f60-be8a-891c0c224019-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.570663 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq9zn\" (UniqueName: \"kubernetes.io/projected/a6316d49-679c-4393-bae7-5ca27876cce4-kube-api-access-wq9zn\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.570677 4814 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6316d49-679c-4393-bae7-5ca27876cce4-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.570692 4814 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a6316d49-679c-4393-bae7-5ca27876cce4-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.570708 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70239f99-6b21-408d-9506-f1b9afbafd90-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:47 crc 
kubenswrapper[4814]: I1202 14:32:47.571010 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70239f99-6b21-408d-9506-f1b9afbafd90-kube-api-access-mcqpp" (OuterVolumeSpecName: "kube-api-access-mcqpp") pod "70239f99-6b21-408d-9506-f1b9afbafd90" (UID: "70239f99-6b21-408d-9506-f1b9afbafd90"). InnerVolumeSpecName "kube-api-access-mcqpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.572032 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638fc62a-4504-4762-bbaa-e7a6a6dfbca0-kube-api-access-bpsmf" (OuterVolumeSpecName: "kube-api-access-bpsmf") pod "638fc62a-4504-4762-bbaa-e7a6a6dfbca0" (UID: "638fc62a-4504-4762-bbaa-e7a6a6dfbca0"). InnerVolumeSpecName "kube-api-access-bpsmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.590420 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d9788e9-39e1-4f60-be8a-891c0c224019-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d9788e9-39e1-4f60-be8a-891c0c224019" (UID: "4d9788e9-39e1-4f60-be8a-891c0c224019"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.615461 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7xfzc" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.626317 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/638fc62a-4504-4762-bbaa-e7a6a6dfbca0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "638fc62a-4504-4762-bbaa-e7a6a6dfbca0" (UID: "638fc62a-4504-4762-bbaa-e7a6a6dfbca0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.672141 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcqpp\" (UniqueName: \"kubernetes.io/projected/70239f99-6b21-408d-9506-f1b9afbafd90-kube-api-access-mcqpp\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.672175 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9788e9-39e1-4f60-be8a-891c0c224019-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.672188 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/638fc62a-4504-4762-bbaa-e7a6a6dfbca0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.672196 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpsmf\" (UniqueName: \"kubernetes.io/projected/638fc62a-4504-4762-bbaa-e7a6a6dfbca0-kube-api-access-bpsmf\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.718137 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70239f99-6b21-408d-9506-f1b9afbafd90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70239f99-6b21-408d-9506-f1b9afbafd90" (UID: "70239f99-6b21-408d-9506-f1b9afbafd90"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.773426 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f714bdcc-9476-4ed4-a164-5e6e5657e0a3-utilities\") pod \"f714bdcc-9476-4ed4-a164-5e6e5657e0a3\" (UID: \"f714bdcc-9476-4ed4-a164-5e6e5657e0a3\") " Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.773614 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f714bdcc-9476-4ed4-a164-5e6e5657e0a3-catalog-content\") pod \"f714bdcc-9476-4ed4-a164-5e6e5657e0a3\" (UID: \"f714bdcc-9476-4ed4-a164-5e6e5657e0a3\") " Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.773742 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm7qf\" (UniqueName: \"kubernetes.io/projected/f714bdcc-9476-4ed4-a164-5e6e5657e0a3-kube-api-access-fm7qf\") pod \"f714bdcc-9476-4ed4-a164-5e6e5657e0a3\" (UID: \"f714bdcc-9476-4ed4-a164-5e6e5657e0a3\") " Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.774119 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70239f99-6b21-408d-9506-f1b9afbafd90-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.774142 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f714bdcc-9476-4ed4-a164-5e6e5657e0a3-utilities" (OuterVolumeSpecName: "utilities") pod "f714bdcc-9476-4ed4-a164-5e6e5657e0a3" (UID: "f714bdcc-9476-4ed4-a164-5e6e5657e0a3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.818997 4814 generic.go:334] "Generic (PLEG): container finished" podID="a6316d49-679c-4393-bae7-5ca27876cce4" containerID="dc0dc7b0fe7d03a9f2adadd72bf8952d65e89a36a3d3c8946a81b134899f97ae" exitCode=0 Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.819076 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" event={"ID":"a6316d49-679c-4393-bae7-5ca27876cce4","Type":"ContainerDied","Data":"dc0dc7b0fe7d03a9f2adadd72bf8952d65e89a36a3d3c8946a81b134899f97ae"} Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.819089 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.819106 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fnp2w" event={"ID":"a6316d49-679c-4393-bae7-5ca27876cce4","Type":"ContainerDied","Data":"f8db3fe96068ac94336dddc2795788600bc0c47549af935dedc307400e392bec"} Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.819124 4814 scope.go:117] "RemoveContainer" containerID="dc0dc7b0fe7d03a9f2adadd72bf8952d65e89a36a3d3c8946a81b134899f97ae" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.820769 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f714bdcc-9476-4ed4-a164-5e6e5657e0a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f714bdcc-9476-4ed4-a164-5e6e5657e0a3" (UID: "f714bdcc-9476-4ed4-a164-5e6e5657e0a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.822652 4814 generic.go:334] "Generic (PLEG): container finished" podID="638fc62a-4504-4762-bbaa-e7a6a6dfbca0" containerID="a7e741870bc8a19bb82ad85146755d63cbd02b80121c09df3e5bf873f3ca520e" exitCode=0 Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.822734 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxpmb" event={"ID":"638fc62a-4504-4762-bbaa-e7a6a6dfbca0","Type":"ContainerDied","Data":"a7e741870bc8a19bb82ad85146755d63cbd02b80121c09df3e5bf873f3ca520e"} Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.822771 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxpmb" event={"ID":"638fc62a-4504-4762-bbaa-e7a6a6dfbca0","Type":"ContainerDied","Data":"262148ac0824976a3012214fd3a7cb2ad543eb55184b7bae1a2fc87e53609b29"} Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.822769 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sxpmb" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.826222 4814 generic.go:334] "Generic (PLEG): container finished" podID="70239f99-6b21-408d-9506-f1b9afbafd90" containerID="58a8fc2af3606a18fc2ab9ffde74a7faaa7b54f9214ffc82a081c72f3a62412f" exitCode=0 Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.826290 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v44dt" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.826291 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v44dt" event={"ID":"70239f99-6b21-408d-9506-f1b9afbafd90","Type":"ContainerDied","Data":"58a8fc2af3606a18fc2ab9ffde74a7faaa7b54f9214ffc82a081c72f3a62412f"} Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.826408 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v44dt" event={"ID":"70239f99-6b21-408d-9506-f1b9afbafd90","Type":"ContainerDied","Data":"04070dc0ce56073ff769def7d2b3d08b51616ae92cdb89db552b201cef7f53de"} Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.831610 4814 generic.go:334] "Generic (PLEG): container finished" podID="f714bdcc-9476-4ed4-a164-5e6e5657e0a3" containerID="91d53366348b7a917ae0f2b194dcaa320c248fedbfcf1461519582a30e55a82e" exitCode=0 Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.831941 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xfzc" event={"ID":"f714bdcc-9476-4ed4-a164-5e6e5657e0a3","Type":"ContainerDied","Data":"91d53366348b7a917ae0f2b194dcaa320c248fedbfcf1461519582a30e55a82e"} Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.832000 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xfzc" event={"ID":"f714bdcc-9476-4ed4-a164-5e6e5657e0a3","Type":"ContainerDied","Data":"e577d2db7688c7c43b0b567f028c422aeede99d4dd5267f70089c820094f0df1"} Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.832214 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7xfzc" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.835677 4814 generic.go:334] "Generic (PLEG): container finished" podID="4d9788e9-39e1-4f60-be8a-891c0c224019" containerID="4bae04dc07a69137e551dc9e56ce95afcd64323a208371e6eeb46876a2aeb954" exitCode=0 Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.835712 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcv28" event={"ID":"4d9788e9-39e1-4f60-be8a-891c0c224019","Type":"ContainerDied","Data":"4bae04dc07a69137e551dc9e56ce95afcd64323a208371e6eeb46876a2aeb954"} Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.835732 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcv28" event={"ID":"4d9788e9-39e1-4f60-be8a-891c0c224019","Type":"ContainerDied","Data":"3305fa127c8c08df9132c828ccb2e11649a138e06c4f04a1752adbc37f692f0d"} Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.835809 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcv28" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.856107 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sxpmb"] Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.864782 4814 scope.go:117] "RemoveContainer" containerID="dc0dc7b0fe7d03a9f2adadd72bf8952d65e89a36a3d3c8946a81b134899f97ae" Dec 02 14:32:47 crc kubenswrapper[4814]: E1202 14:32:47.865351 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc0dc7b0fe7d03a9f2adadd72bf8952d65e89a36a3d3c8946a81b134899f97ae\": container with ID starting with dc0dc7b0fe7d03a9f2adadd72bf8952d65e89a36a3d3c8946a81b134899f97ae not found: ID does not exist" containerID="dc0dc7b0fe7d03a9f2adadd72bf8952d65e89a36a3d3c8946a81b134899f97ae" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.865392 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc0dc7b0fe7d03a9f2adadd72bf8952d65e89a36a3d3c8946a81b134899f97ae"} err="failed to get container status \"dc0dc7b0fe7d03a9f2adadd72bf8952d65e89a36a3d3c8946a81b134899f97ae\": rpc error: code = NotFound desc = could not find container \"dc0dc7b0fe7d03a9f2adadd72bf8952d65e89a36a3d3c8946a81b134899f97ae\": container with ID starting with dc0dc7b0fe7d03a9f2adadd72bf8952d65e89a36a3d3c8946a81b134899f97ae not found: ID does not exist" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.865422 4814 scope.go:117] "RemoveContainer" containerID="a7e741870bc8a19bb82ad85146755d63cbd02b80121c09df3e5bf873f3ca520e" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.868880 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sxpmb"] Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.876405 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f714bdcc-9476-4ed4-a164-5e6e5657e0a3-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.876438 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f714bdcc-9476-4ed4-a164-5e6e5657e0a3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.881638 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fnp2w"] Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.885453 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fnp2w"] Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.895043 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f714bdcc-9476-4ed4-a164-5e6e5657e0a3-kube-api-access-fm7qf" (OuterVolumeSpecName: "kube-api-access-fm7qf") pod "f714bdcc-9476-4ed4-a164-5e6e5657e0a3" (UID: "f714bdcc-9476-4ed4-a164-5e6e5657e0a3"). InnerVolumeSpecName "kube-api-access-fm7qf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.904453 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcv28"] Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.907216 4814 scope.go:117] "RemoveContainer" containerID="f8660bf30e75e4ae6ed126be6512c6c6390b2942d0d0f3f4de11b7998728a3c2" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.911965 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcv28"] Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.920170 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v44dt"] Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.923983 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v44dt"] Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.927372 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wmlq8"] Dec 02 14:32:47 crc kubenswrapper[4814]: W1202 14:32:47.935327 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd894607_26ea_4312_98b1_99c3cdcd522a.slice/crio-7e55698f800878a5ce51de0dc8dcd1f0766fdd45309e8b1534f2704c10e0506d WatchSource:0}: Error finding container 7e55698f800878a5ce51de0dc8dcd1f0766fdd45309e8b1534f2704c10e0506d: Status 404 returned error can't find the container with id 7e55698f800878a5ce51de0dc8dcd1f0766fdd45309e8b1534f2704c10e0506d Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.947117 4814 scope.go:117] "RemoveContainer" containerID="2af201d318f6e89ff3d54714e2fd25c0b20ef0ca1538a86f295534fda1c08d28" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.968749 4814 scope.go:117] "RemoveContainer" containerID="a7e741870bc8a19bb82ad85146755d63cbd02b80121c09df3e5bf873f3ca520e" 
Dec 02 14:32:47 crc kubenswrapper[4814]: E1202 14:32:47.969312 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7e741870bc8a19bb82ad85146755d63cbd02b80121c09df3e5bf873f3ca520e\": container with ID starting with a7e741870bc8a19bb82ad85146755d63cbd02b80121c09df3e5bf873f3ca520e not found: ID does not exist" containerID="a7e741870bc8a19bb82ad85146755d63cbd02b80121c09df3e5bf873f3ca520e" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.969360 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e741870bc8a19bb82ad85146755d63cbd02b80121c09df3e5bf873f3ca520e"} err="failed to get container status \"a7e741870bc8a19bb82ad85146755d63cbd02b80121c09df3e5bf873f3ca520e\": rpc error: code = NotFound desc = could not find container \"a7e741870bc8a19bb82ad85146755d63cbd02b80121c09df3e5bf873f3ca520e\": container with ID starting with a7e741870bc8a19bb82ad85146755d63cbd02b80121c09df3e5bf873f3ca520e not found: ID does not exist" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.969407 4814 scope.go:117] "RemoveContainer" containerID="f8660bf30e75e4ae6ed126be6512c6c6390b2942d0d0f3f4de11b7998728a3c2" Dec 02 14:32:47 crc kubenswrapper[4814]: E1202 14:32:47.969851 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8660bf30e75e4ae6ed126be6512c6c6390b2942d0d0f3f4de11b7998728a3c2\": container with ID starting with f8660bf30e75e4ae6ed126be6512c6c6390b2942d0d0f3f4de11b7998728a3c2 not found: ID does not exist" containerID="f8660bf30e75e4ae6ed126be6512c6c6390b2942d0d0f3f4de11b7998728a3c2" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.969875 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8660bf30e75e4ae6ed126be6512c6c6390b2942d0d0f3f4de11b7998728a3c2"} err="failed to get container status 
\"f8660bf30e75e4ae6ed126be6512c6c6390b2942d0d0f3f4de11b7998728a3c2\": rpc error: code = NotFound desc = could not find container \"f8660bf30e75e4ae6ed126be6512c6c6390b2942d0d0f3f4de11b7998728a3c2\": container with ID starting with f8660bf30e75e4ae6ed126be6512c6c6390b2942d0d0f3f4de11b7998728a3c2 not found: ID does not exist" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.969889 4814 scope.go:117] "RemoveContainer" containerID="2af201d318f6e89ff3d54714e2fd25c0b20ef0ca1538a86f295534fda1c08d28" Dec 02 14:32:47 crc kubenswrapper[4814]: E1202 14:32:47.970163 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2af201d318f6e89ff3d54714e2fd25c0b20ef0ca1538a86f295534fda1c08d28\": container with ID starting with 2af201d318f6e89ff3d54714e2fd25c0b20ef0ca1538a86f295534fda1c08d28 not found: ID does not exist" containerID="2af201d318f6e89ff3d54714e2fd25c0b20ef0ca1538a86f295534fda1c08d28" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.970184 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2af201d318f6e89ff3d54714e2fd25c0b20ef0ca1538a86f295534fda1c08d28"} err="failed to get container status \"2af201d318f6e89ff3d54714e2fd25c0b20ef0ca1538a86f295534fda1c08d28\": rpc error: code = NotFound desc = could not find container \"2af201d318f6e89ff3d54714e2fd25c0b20ef0ca1538a86f295534fda1c08d28\": container with ID starting with 2af201d318f6e89ff3d54714e2fd25c0b20ef0ca1538a86f295534fda1c08d28 not found: ID does not exist" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.970200 4814 scope.go:117] "RemoveContainer" containerID="58a8fc2af3606a18fc2ab9ffde74a7faaa7b54f9214ffc82a081c72f3a62412f" Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.978248 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm7qf\" (UniqueName: \"kubernetes.io/projected/f714bdcc-9476-4ed4-a164-5e6e5657e0a3-kube-api-access-fm7qf\") on node 
\"crc\" DevicePath \"\""
Dec 02 14:32:47 crc kubenswrapper[4814]: I1202 14:32:47.995438 4814 scope.go:117] "RemoveContainer" containerID="f67386d6142dbeed5bfd0d0ba5462c2f4ebacf4839d79a79272dc10ddae79b7e"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.073920 4814 scope.go:117] "RemoveContainer" containerID="2fb7e07005f0478337b2a175650b32c1818e24c1ad0c0d1f950aa037272b5aea"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.093318 4814 scope.go:117] "RemoveContainer" containerID="58a8fc2af3606a18fc2ab9ffde74a7faaa7b54f9214ffc82a081c72f3a62412f"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.095561 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58a8fc2af3606a18fc2ab9ffde74a7faaa7b54f9214ffc82a081c72f3a62412f\": container with ID starting with 58a8fc2af3606a18fc2ab9ffde74a7faaa7b54f9214ffc82a081c72f3a62412f not found: ID does not exist" containerID="58a8fc2af3606a18fc2ab9ffde74a7faaa7b54f9214ffc82a081c72f3a62412f"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.095617 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58a8fc2af3606a18fc2ab9ffde74a7faaa7b54f9214ffc82a081c72f3a62412f"} err="failed to get container status \"58a8fc2af3606a18fc2ab9ffde74a7faaa7b54f9214ffc82a081c72f3a62412f\": rpc error: code = NotFound desc = could not find container \"58a8fc2af3606a18fc2ab9ffde74a7faaa7b54f9214ffc82a081c72f3a62412f\": container with ID starting with 58a8fc2af3606a18fc2ab9ffde74a7faaa7b54f9214ffc82a081c72f3a62412f not found: ID does not exist"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.095663 4814 scope.go:117] "RemoveContainer" containerID="f67386d6142dbeed5bfd0d0ba5462c2f4ebacf4839d79a79272dc10ddae79b7e"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.096224 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f67386d6142dbeed5bfd0d0ba5462c2f4ebacf4839d79a79272dc10ddae79b7e\": container with ID starting with f67386d6142dbeed5bfd0d0ba5462c2f4ebacf4839d79a79272dc10ddae79b7e not found: ID does not exist" containerID="f67386d6142dbeed5bfd0d0ba5462c2f4ebacf4839d79a79272dc10ddae79b7e"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.096299 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f67386d6142dbeed5bfd0d0ba5462c2f4ebacf4839d79a79272dc10ddae79b7e"} err="failed to get container status \"f67386d6142dbeed5bfd0d0ba5462c2f4ebacf4839d79a79272dc10ddae79b7e\": rpc error: code = NotFound desc = could not find container \"f67386d6142dbeed5bfd0d0ba5462c2f4ebacf4839d79a79272dc10ddae79b7e\": container with ID starting with f67386d6142dbeed5bfd0d0ba5462c2f4ebacf4839d79a79272dc10ddae79b7e not found: ID does not exist"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.096360 4814 scope.go:117] "RemoveContainer" containerID="2fb7e07005f0478337b2a175650b32c1818e24c1ad0c0d1f950aa037272b5aea"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.096966 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fb7e07005f0478337b2a175650b32c1818e24c1ad0c0d1f950aa037272b5aea\": container with ID starting with 2fb7e07005f0478337b2a175650b32c1818e24c1ad0c0d1f950aa037272b5aea not found: ID does not exist" containerID="2fb7e07005f0478337b2a175650b32c1818e24c1ad0c0d1f950aa037272b5aea"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.096994 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fb7e07005f0478337b2a175650b32c1818e24c1ad0c0d1f950aa037272b5aea"} err="failed to get container status \"2fb7e07005f0478337b2a175650b32c1818e24c1ad0c0d1f950aa037272b5aea\": rpc error: code = NotFound desc = could not find container \"2fb7e07005f0478337b2a175650b32c1818e24c1ad0c0d1f950aa037272b5aea\": container with ID starting with 2fb7e07005f0478337b2a175650b32c1818e24c1ad0c0d1f950aa037272b5aea not found: ID does not exist"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.097009 4814 scope.go:117] "RemoveContainer" containerID="91d53366348b7a917ae0f2b194dcaa320c248fedbfcf1461519582a30e55a82e"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.116802 4814 scope.go:117] "RemoveContainer" containerID="31e15d99dd35e34e2d0dee7415fb391571dff637269f066be76a4f4df483c30e"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.135278 4814 scope.go:117] "RemoveContainer" containerID="ab55601aef0ac071a50af6db1bd54cc37c36ed8ea7166355557fcfac003d67f7"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.153117 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d9788e9-39e1-4f60-be8a-891c0c224019" path="/var/lib/kubelet/pods/4d9788e9-39e1-4f60-be8a-891c0c224019/volumes"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.153826 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="638fc62a-4504-4762-bbaa-e7a6a6dfbca0" path="/var/lib/kubelet/pods/638fc62a-4504-4762-bbaa-e7a6a6dfbca0/volumes"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.154415 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70239f99-6b21-408d-9506-f1b9afbafd90" path="/var/lib/kubelet/pods/70239f99-6b21-408d-9506-f1b9afbafd90/volumes"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.155567 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6316d49-679c-4393-bae7-5ca27876cce4" path="/var/lib/kubelet/pods/a6316d49-679c-4393-bae7-5ca27876cce4/volumes"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.159602 4814 scope.go:117] "RemoveContainer" containerID="91d53366348b7a917ae0f2b194dcaa320c248fedbfcf1461519582a30e55a82e"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.160002 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91d53366348b7a917ae0f2b194dcaa320c248fedbfcf1461519582a30e55a82e\": container with ID starting with 91d53366348b7a917ae0f2b194dcaa320c248fedbfcf1461519582a30e55a82e not found: ID does not exist" containerID="91d53366348b7a917ae0f2b194dcaa320c248fedbfcf1461519582a30e55a82e"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.160037 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91d53366348b7a917ae0f2b194dcaa320c248fedbfcf1461519582a30e55a82e"} err="failed to get container status \"91d53366348b7a917ae0f2b194dcaa320c248fedbfcf1461519582a30e55a82e\": rpc error: code = NotFound desc = could not find container \"91d53366348b7a917ae0f2b194dcaa320c248fedbfcf1461519582a30e55a82e\": container with ID starting with 91d53366348b7a917ae0f2b194dcaa320c248fedbfcf1461519582a30e55a82e not found: ID does not exist"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.160076 4814 scope.go:117] "RemoveContainer" containerID="31e15d99dd35e34e2d0dee7415fb391571dff637269f066be76a4f4df483c30e"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.160496 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31e15d99dd35e34e2d0dee7415fb391571dff637269f066be76a4f4df483c30e\": container with ID starting with 31e15d99dd35e34e2d0dee7415fb391571dff637269f066be76a4f4df483c30e not found: ID does not exist" containerID="31e15d99dd35e34e2d0dee7415fb391571dff637269f066be76a4f4df483c30e"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.160563 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31e15d99dd35e34e2d0dee7415fb391571dff637269f066be76a4f4df483c30e"} err="failed to get container status \"31e15d99dd35e34e2d0dee7415fb391571dff637269f066be76a4f4df483c30e\": rpc error: code = NotFound desc = could not find container \"31e15d99dd35e34e2d0dee7415fb391571dff637269f066be76a4f4df483c30e\": container with ID starting with 31e15d99dd35e34e2d0dee7415fb391571dff637269f066be76a4f4df483c30e not found: ID does not exist"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.160607 4814 scope.go:117] "RemoveContainer" containerID="ab55601aef0ac071a50af6db1bd54cc37c36ed8ea7166355557fcfac003d67f7"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.162360 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab55601aef0ac071a50af6db1bd54cc37c36ed8ea7166355557fcfac003d67f7\": container with ID starting with ab55601aef0ac071a50af6db1bd54cc37c36ed8ea7166355557fcfac003d67f7 not found: ID does not exist" containerID="ab55601aef0ac071a50af6db1bd54cc37c36ed8ea7166355557fcfac003d67f7"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.162387 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab55601aef0ac071a50af6db1bd54cc37c36ed8ea7166355557fcfac003d67f7"} err="failed to get container status \"ab55601aef0ac071a50af6db1bd54cc37c36ed8ea7166355557fcfac003d67f7\": rpc error: code = NotFound desc = could not find container \"ab55601aef0ac071a50af6db1bd54cc37c36ed8ea7166355557fcfac003d67f7\": container with ID starting with ab55601aef0ac071a50af6db1bd54cc37c36ed8ea7166355557fcfac003d67f7 not found: ID does not exist"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.162403 4814 scope.go:117] "RemoveContainer" containerID="4bae04dc07a69137e551dc9e56ce95afcd64323a208371e6eeb46876a2aeb954"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.183421 4814 scope.go:117] "RemoveContainer" containerID="fcf54beb0937de0aeaf94f8e1d390114c97650dba327ee58c35f4b34c4e932d2"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.183688 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7xfzc"]
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.186088 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7xfzc"]
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.202793 4814 scope.go:117] "RemoveContainer" containerID="e2d6dc836295e7bbb6a1bcccfc3c1f35286b4054a70ab64b6316a938d2958f39"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.216780 4814 scope.go:117] "RemoveContainer" containerID="4bae04dc07a69137e551dc9e56ce95afcd64323a208371e6eeb46876a2aeb954"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.217140 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bae04dc07a69137e551dc9e56ce95afcd64323a208371e6eeb46876a2aeb954\": container with ID starting with 4bae04dc07a69137e551dc9e56ce95afcd64323a208371e6eeb46876a2aeb954 not found: ID does not exist" containerID="4bae04dc07a69137e551dc9e56ce95afcd64323a208371e6eeb46876a2aeb954"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.217173 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bae04dc07a69137e551dc9e56ce95afcd64323a208371e6eeb46876a2aeb954"} err="failed to get container status \"4bae04dc07a69137e551dc9e56ce95afcd64323a208371e6eeb46876a2aeb954\": rpc error: code = NotFound desc = could not find container \"4bae04dc07a69137e551dc9e56ce95afcd64323a208371e6eeb46876a2aeb954\": container with ID starting with 4bae04dc07a69137e551dc9e56ce95afcd64323a208371e6eeb46876a2aeb954 not found: ID does not exist"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.217201 4814 scope.go:117] "RemoveContainer" containerID="fcf54beb0937de0aeaf94f8e1d390114c97650dba327ee58c35f4b34c4e932d2"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.217557 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcf54beb0937de0aeaf94f8e1d390114c97650dba327ee58c35f4b34c4e932d2\": container with ID starting with fcf54beb0937de0aeaf94f8e1d390114c97650dba327ee58c35f4b34c4e932d2 not found: ID does not exist" containerID="fcf54beb0937de0aeaf94f8e1d390114c97650dba327ee58c35f4b34c4e932d2"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.217611 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcf54beb0937de0aeaf94f8e1d390114c97650dba327ee58c35f4b34c4e932d2"} err="failed to get container status \"fcf54beb0937de0aeaf94f8e1d390114c97650dba327ee58c35f4b34c4e932d2\": rpc error: code = NotFound desc = could not find container \"fcf54beb0937de0aeaf94f8e1d390114c97650dba327ee58c35f4b34c4e932d2\": container with ID starting with fcf54beb0937de0aeaf94f8e1d390114c97650dba327ee58c35f4b34c4e932d2 not found: ID does not exist"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.217649 4814 scope.go:117] "RemoveContainer" containerID="e2d6dc836295e7bbb6a1bcccfc3c1f35286b4054a70ab64b6316a938d2958f39"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.218059 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2d6dc836295e7bbb6a1bcccfc3c1f35286b4054a70ab64b6316a938d2958f39\": container with ID starting with e2d6dc836295e7bbb6a1bcccfc3c1f35286b4054a70ab64b6316a938d2958f39 not found: ID does not exist" containerID="e2d6dc836295e7bbb6a1bcccfc3c1f35286b4054a70ab64b6316a938d2958f39"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.218096 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2d6dc836295e7bbb6a1bcccfc3c1f35286b4054a70ab64b6316a938d2958f39"} err="failed to get container status \"e2d6dc836295e7bbb6a1bcccfc3c1f35286b4054a70ab64b6316a938d2958f39\": rpc error: code = NotFound desc = could not find container \"e2d6dc836295e7bbb6a1bcccfc3c1f35286b4054a70ab64b6316a938d2958f39\": container with ID starting with e2d6dc836295e7bbb6a1bcccfc3c1f35286b4054a70ab64b6316a938d2958f39 not found: ID does not exist"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.845010 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-csss6"]
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.845313 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="638fc62a-4504-4762-bbaa-e7a6a6dfbca0" containerName="extract-content"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.845329 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="638fc62a-4504-4762-bbaa-e7a6a6dfbca0" containerName="extract-content"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.845351 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f714bdcc-9476-4ed4-a164-5e6e5657e0a3" containerName="registry-server"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.845358 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f714bdcc-9476-4ed4-a164-5e6e5657e0a3" containerName="registry-server"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.845369 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9788e9-39e1-4f60-be8a-891c0c224019" containerName="extract-content"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.845378 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9788e9-39e1-4f60-be8a-891c0c224019" containerName="extract-content"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.845446 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="638fc62a-4504-4762-bbaa-e7a6a6dfbca0" containerName="extract-utilities"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.845479 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="638fc62a-4504-4762-bbaa-e7a6a6dfbca0" containerName="extract-utilities"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.845486 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9788e9-39e1-4f60-be8a-891c0c224019" containerName="registry-server"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.845493 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9788e9-39e1-4f60-be8a-891c0c224019" containerName="registry-server"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.845503 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6316d49-679c-4393-bae7-5ca27876cce4" containerName="marketplace-operator"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.845511 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6316d49-679c-4393-bae7-5ca27876cce4" containerName="marketplace-operator"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.845522 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70239f99-6b21-408d-9506-f1b9afbafd90" containerName="registry-server"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.845528 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="70239f99-6b21-408d-9506-f1b9afbafd90" containerName="registry-server"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.845535 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70239f99-6b21-408d-9506-f1b9afbafd90" containerName="extract-content"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.845540 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="70239f99-6b21-408d-9506-f1b9afbafd90" containerName="extract-content"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.845548 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f714bdcc-9476-4ed4-a164-5e6e5657e0a3" containerName="extract-content"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.845553 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f714bdcc-9476-4ed4-a164-5e6e5657e0a3" containerName="extract-content"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.845559 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70239f99-6b21-408d-9506-f1b9afbafd90" containerName="extract-utilities"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.845565 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="70239f99-6b21-408d-9506-f1b9afbafd90" containerName="extract-utilities"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.845723 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="638fc62a-4504-4762-bbaa-e7a6a6dfbca0" containerName="registry-server"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.845732 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="638fc62a-4504-4762-bbaa-e7a6a6dfbca0" containerName="registry-server"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.845741 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f714bdcc-9476-4ed4-a164-5e6e5657e0a3" containerName="extract-utilities"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.845748 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f714bdcc-9476-4ed4-a164-5e6e5657e0a3" containerName="extract-utilities"
Dec 02 14:32:48 crc kubenswrapper[4814]: E1202 14:32:48.845772 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9788e9-39e1-4f60-be8a-891c0c224019" containerName="extract-utilities"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.845778 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9788e9-39e1-4f60-be8a-891c0c224019" containerName="extract-utilities"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.846012 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="70239f99-6b21-408d-9506-f1b9afbafd90" containerName="registry-server"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.846027 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="638fc62a-4504-4762-bbaa-e7a6a6dfbca0" containerName="registry-server"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.846037 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6316d49-679c-4393-bae7-5ca27876cce4" containerName="marketplace-operator"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.846043 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d9788e9-39e1-4f60-be8a-891c0c224019" containerName="registry-server"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.846052 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f714bdcc-9476-4ed4-a164-5e6e5657e0a3" containerName="registry-server"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.847774 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wmlq8" event={"ID":"cd894607-26ea-4312-98b1-99c3cdcd522a","Type":"ContainerStarted","Data":"a272bdd879a3d1e891fd842befe5f35a12600e54f8ac7ce99c9c2dd742a00b4b"}
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.847807 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wmlq8" event={"ID":"cd894607-26ea-4312-98b1-99c3cdcd522a","Type":"ContainerStarted","Data":"7e55698f800878a5ce51de0dc8dcd1f0766fdd45309e8b1534f2704c10e0506d"}
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.847826 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wmlq8"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.847892 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-csss6"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.849748 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.855339 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wmlq8"
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.863858 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-csss6"]
Dec 02 14:32:48 crc kubenswrapper[4814]: I1202 14:32:48.864811 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wmlq8" podStartSLOduration=2.8647876549999998 podStartE2EDuration="2.864787655s" podCreationTimestamp="2025-12-02 14:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:32:48.862821015 +0000 UTC m=+230.987183219" watchObservedRunningTime="2025-12-02 14:32:48.864787655 +0000 UTC m=+230.989149849"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.006341 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed0035d3-3c24-4850-a7d5-e606377481bb-utilities\") pod \"certified-operators-csss6\" (UID: \"ed0035d3-3c24-4850-a7d5-e606377481bb\") " pod="openshift-marketplace/certified-operators-csss6"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.006396 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed0035d3-3c24-4850-a7d5-e606377481bb-catalog-content\") pod \"certified-operators-csss6\" (UID: \"ed0035d3-3c24-4850-a7d5-e606377481bb\") " pod="openshift-marketplace/certified-operators-csss6"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.006437 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m27wf\" (UniqueName: \"kubernetes.io/projected/ed0035d3-3c24-4850-a7d5-e606377481bb-kube-api-access-m27wf\") pod \"certified-operators-csss6\" (UID: \"ed0035d3-3c24-4850-a7d5-e606377481bb\") " pod="openshift-marketplace/certified-operators-csss6"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.107724 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed0035d3-3c24-4850-a7d5-e606377481bb-utilities\") pod \"certified-operators-csss6\" (UID: \"ed0035d3-3c24-4850-a7d5-e606377481bb\") " pod="openshift-marketplace/certified-operators-csss6"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.107782 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed0035d3-3c24-4850-a7d5-e606377481bb-catalog-content\") pod \"certified-operators-csss6\" (UID: \"ed0035d3-3c24-4850-a7d5-e606377481bb\") " pod="openshift-marketplace/certified-operators-csss6"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.107815 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m27wf\" (UniqueName: \"kubernetes.io/projected/ed0035d3-3c24-4850-a7d5-e606377481bb-kube-api-access-m27wf\") pod \"certified-operators-csss6\" (UID: \"ed0035d3-3c24-4850-a7d5-e606377481bb\") " pod="openshift-marketplace/certified-operators-csss6"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.109071 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed0035d3-3c24-4850-a7d5-e606377481bb-utilities\") pod \"certified-operators-csss6\" (UID: \"ed0035d3-3c24-4850-a7d5-e606377481bb\") " pod="openshift-marketplace/certified-operators-csss6"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.109311 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed0035d3-3c24-4850-a7d5-e606377481bb-catalog-content\") pod \"certified-operators-csss6\" (UID: \"ed0035d3-3c24-4850-a7d5-e606377481bb\") " pod="openshift-marketplace/certified-operators-csss6"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.135781 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m27wf\" (UniqueName: \"kubernetes.io/projected/ed0035d3-3c24-4850-a7d5-e606377481bb-kube-api-access-m27wf\") pod \"certified-operators-csss6\" (UID: \"ed0035d3-3c24-4850-a7d5-e606377481bb\") " pod="openshift-marketplace/certified-operators-csss6"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.170733 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-csss6"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.444022 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9xllv"]
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.445093 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9xllv"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.447019 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.459860 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9xllv"]
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.605148 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-csss6"]
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.615479 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cea53aaa-316d-439c-9269-0d09e8d23eef-utilities\") pod \"redhat-marketplace-9xllv\" (UID: \"cea53aaa-316d-439c-9269-0d09e8d23eef\") " pod="openshift-marketplace/redhat-marketplace-9xllv"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.615554 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g857j\" (UniqueName: \"kubernetes.io/projected/cea53aaa-316d-439c-9269-0d09e8d23eef-kube-api-access-g857j\") pod \"redhat-marketplace-9xllv\" (UID: \"cea53aaa-316d-439c-9269-0d09e8d23eef\") " pod="openshift-marketplace/redhat-marketplace-9xllv"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.615744 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cea53aaa-316d-439c-9269-0d09e8d23eef-catalog-content\") pod \"redhat-marketplace-9xllv\" (UID: \"cea53aaa-316d-439c-9269-0d09e8d23eef\") " pod="openshift-marketplace/redhat-marketplace-9xllv"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.717982 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cea53aaa-316d-439c-9269-0d09e8d23eef-utilities\") pod \"redhat-marketplace-9xllv\" (UID: \"cea53aaa-316d-439c-9269-0d09e8d23eef\") " pod="openshift-marketplace/redhat-marketplace-9xllv"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.718114 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g857j\" (UniqueName: \"kubernetes.io/projected/cea53aaa-316d-439c-9269-0d09e8d23eef-kube-api-access-g857j\") pod \"redhat-marketplace-9xllv\" (UID: \"cea53aaa-316d-439c-9269-0d09e8d23eef\") " pod="openshift-marketplace/redhat-marketplace-9xllv"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.718184 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cea53aaa-316d-439c-9269-0d09e8d23eef-catalog-content\") pod \"redhat-marketplace-9xllv\" (UID: \"cea53aaa-316d-439c-9269-0d09e8d23eef\") " pod="openshift-marketplace/redhat-marketplace-9xllv"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.719318 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cea53aaa-316d-439c-9269-0d09e8d23eef-utilities\") pod \"redhat-marketplace-9xllv\" (UID: \"cea53aaa-316d-439c-9269-0d09e8d23eef\") " pod="openshift-marketplace/redhat-marketplace-9xllv"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.719737 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cea53aaa-316d-439c-9269-0d09e8d23eef-catalog-content\") pod \"redhat-marketplace-9xllv\" (UID: \"cea53aaa-316d-439c-9269-0d09e8d23eef\") " pod="openshift-marketplace/redhat-marketplace-9xllv"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.745537 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g857j\" (UniqueName: \"kubernetes.io/projected/cea53aaa-316d-439c-9269-0d09e8d23eef-kube-api-access-g857j\") pod \"redhat-marketplace-9xllv\" (UID: \"cea53aaa-316d-439c-9269-0d09e8d23eef\") " pod="openshift-marketplace/redhat-marketplace-9xllv"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.768338 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9xllv"
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.858934 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-csss6" event={"ID":"ed0035d3-3c24-4850-a7d5-e606377481bb","Type":"ContainerStarted","Data":"5068717251c45991550c1cfb546a8752dafdf7852fb45f9a305c81a08f691985"}
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.858988 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-csss6" event={"ID":"ed0035d3-3c24-4850-a7d5-e606377481bb","Type":"ContainerStarted","Data":"eab6c134fd0cb0fdab9d2f9700a30407588ed2f6edb124d61220bd5d07d2480f"}
Dec 02 14:32:49 crc kubenswrapper[4814]: I1202 14:32:49.959554 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9xllv"]
Dec 02 14:32:50 crc kubenswrapper[4814]: I1202 14:32:50.151968 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f714bdcc-9476-4ed4-a164-5e6e5657e0a3" path="/var/lib/kubelet/pods/f714bdcc-9476-4ed4-a164-5e6e5657e0a3/volumes"
Dec 02 14:32:50 crc kubenswrapper[4814]: I1202 14:32:50.651047 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 14:32:50 crc kubenswrapper[4814]: I1202 14:32:50.651122 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 14:32:50 crc kubenswrapper[4814]: I1202 14:32:50.651182 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8"
Dec 02 14:32:50 crc kubenswrapper[4814]: I1202 14:32:50.652349 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2"} pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 14:32:50 crc kubenswrapper[4814]: I1202 14:32:50.652424 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" containerID="cri-o://0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2" gracePeriod=600
Dec 02 14:32:50 crc kubenswrapper[4814]: I1202 14:32:50.869280 4814 generic.go:334] "Generic (PLEG): container finished" podID="cea53aaa-316d-439c-9269-0d09e8d23eef" containerID="794a31472a96b1390e5061f120a61c91b0a9345b0136954a1897808353a8c184" exitCode=0
Dec 02 14:32:50 crc kubenswrapper[4814]: I1202 14:32:50.869389 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xllv" event={"ID":"cea53aaa-316d-439c-9269-0d09e8d23eef","Type":"ContainerDied","Data":"794a31472a96b1390e5061f120a61c91b0a9345b0136954a1897808353a8c184"}
Dec 02 14:32:50 crc kubenswrapper[4814]: I1202 14:32:50.869775 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xllv" event={"ID":"cea53aaa-316d-439c-9269-0d09e8d23eef","Type":"ContainerStarted","Data":"a2096f02d96df121930d81301443c01654d1282a2e13aa662a2c135cdad3d8ee"}
Dec 02 14:32:50 crc kubenswrapper[4814]: I1202 14:32:50.874098 4814 generic.go:334] "Generic (PLEG): container finished" podID="ed0035d3-3c24-4850-a7d5-e606377481bb" containerID="5068717251c45991550c1cfb546a8752dafdf7852fb45f9a305c81a08f691985" exitCode=0
Dec 02 14:32:50 crc kubenswrapper[4814]: I1202 14:32:50.874177 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-csss6" event={"ID":"ed0035d3-3c24-4850-a7d5-e606377481bb","Type":"ContainerDied","Data":"5068717251c45991550c1cfb546a8752dafdf7852fb45f9a305c81a08f691985"}
Dec 02 14:32:50 crc kubenswrapper[4814]: I1202 14:32:50.881626 4814 generic.go:334] "Generic (PLEG): container finished" podID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerID="0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2" exitCode=0
Dec 02 14:32:50 crc kubenswrapper[4814]: I1202 14:32:50.882334 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerDied","Data":"0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2"}
Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.245959 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bntmr"]
Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.247477 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bntmr"
Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.250071 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.259183 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bntmr"]
Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.340266 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbp7h\" (UniqueName: \"kubernetes.io/projected/6d3328f3-af2c-4579-929d-f9857cbccdb9-kube-api-access-xbp7h\") pod \"redhat-operators-bntmr\" (UID: \"6d3328f3-af2c-4579-929d-f9857cbccdb9\") " pod="openshift-marketplace/redhat-operators-bntmr"
Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.340451 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d3328f3-af2c-4579-929d-f9857cbccdb9-utilities\") pod \"redhat-operators-bntmr\" (UID: \"6d3328f3-af2c-4579-929d-f9857cbccdb9\") " pod="openshift-marketplace/redhat-operators-bntmr"
Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.340585 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d3328f3-af2c-4579-929d-f9857cbccdb9-catalog-content\") pod \"redhat-operators-bntmr\" (UID: \"6d3328f3-af2c-4579-929d-f9857cbccdb9\") " pod="openshift-marketplace/redhat-operators-bntmr"
Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.442428 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d3328f3-af2c-4579-929d-f9857cbccdb9-catalog-content\") pod \"redhat-operators-bntmr\" (UID: \"6d3328f3-af2c-4579-929d-f9857cbccdb9\") " pod="openshift-marketplace/redhat-operators-bntmr"
Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.442519 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbp7h\" (UniqueName: \"kubernetes.io/projected/6d3328f3-af2c-4579-929d-f9857cbccdb9-kube-api-access-xbp7h\") pod \"redhat-operators-bntmr\" (UID: \"6d3328f3-af2c-4579-929d-f9857cbccdb9\") " pod="openshift-marketplace/redhat-operators-bntmr"
Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.442577 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d3328f3-af2c-4579-929d-f9857cbccdb9-utilities\") pod \"redhat-operators-bntmr\" (UID: \"6d3328f3-af2c-4579-929d-f9857cbccdb9\") " pod="openshift-marketplace/redhat-operators-bntmr"
Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.443669 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d3328f3-af2c-4579-929d-f9857cbccdb9-catalog-content\") pod \"redhat-operators-bntmr\" (UID: \"6d3328f3-af2c-4579-929d-f9857cbccdb9\") " pod="openshift-marketplace/redhat-operators-bntmr"
Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.443922 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d3328f3-af2c-4579-929d-f9857cbccdb9-utilities\") pod \"redhat-operators-bntmr\" (UID: \"6d3328f3-af2c-4579-929d-f9857cbccdb9\") " pod="openshift-marketplace/redhat-operators-bntmr"
Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.466371 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbp7h\" (UniqueName: \"kubernetes.io/projected/6d3328f3-af2c-4579-929d-f9857cbccdb9-kube-api-access-xbp7h\") pod \"redhat-operators-bntmr\" (UID: \"6d3328f3-af2c-4579-929d-f9857cbccdb9\") " 
pod="openshift-marketplace/redhat-operators-bntmr" Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.581519 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bntmr" Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.826809 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bntmr"] Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.847988 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2sf56"] Dec 02 14:32:51 crc kubenswrapper[4814]: W1202 14:32:51.848853 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d3328f3_af2c_4579_929d_f9857cbccdb9.slice/crio-c78710cc46c8e11b53c82ab4c318ab0e2df89b23493222d055d9170abf583625 WatchSource:0}: Error finding container c78710cc46c8e11b53c82ab4c318ab0e2df89b23493222d055d9170abf583625: Status 404 returned error can't find the container with id c78710cc46c8e11b53c82ab4c318ab0e2df89b23493222d055d9170abf583625 Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.849107 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2sf56" Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.852611 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.867154 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2sf56"] Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.889112 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bntmr" event={"ID":"6d3328f3-af2c-4579-929d-f9857cbccdb9","Type":"ContainerStarted","Data":"c78710cc46c8e11b53c82ab4c318ab0e2df89b23493222d055d9170abf583625"} Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.897808 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerStarted","Data":"4d6d34edeed3132fe16646589dc9a6700a81ed66231cd064d17aa7b5c701ae63"} Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.952438 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4566b4a-d525-4e3c-8a3b-b1825d8e736e-catalog-content\") pod \"community-operators-2sf56\" (UID: \"a4566b4a-d525-4e3c-8a3b-b1825d8e736e\") " pod="openshift-marketplace/community-operators-2sf56" Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.952932 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4566b4a-d525-4e3c-8a3b-b1825d8e736e-utilities\") pod \"community-operators-2sf56\" (UID: \"a4566b4a-d525-4e3c-8a3b-b1825d8e736e\") " pod="openshift-marketplace/community-operators-2sf56" Dec 02 14:32:51 crc kubenswrapper[4814]: I1202 14:32:51.952980 4814 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbwpk\" (UniqueName: \"kubernetes.io/projected/a4566b4a-d525-4e3c-8a3b-b1825d8e736e-kube-api-access-gbwpk\") pod \"community-operators-2sf56\" (UID: \"a4566b4a-d525-4e3c-8a3b-b1825d8e736e\") " pod="openshift-marketplace/community-operators-2sf56" Dec 02 14:32:52 crc kubenswrapper[4814]: I1202 14:32:52.056284 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4566b4a-d525-4e3c-8a3b-b1825d8e736e-utilities\") pod \"community-operators-2sf56\" (UID: \"a4566b4a-d525-4e3c-8a3b-b1825d8e736e\") " pod="openshift-marketplace/community-operators-2sf56" Dec 02 14:32:52 crc kubenswrapper[4814]: I1202 14:32:52.057051 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbwpk\" (UniqueName: \"kubernetes.io/projected/a4566b4a-d525-4e3c-8a3b-b1825d8e736e-kube-api-access-gbwpk\") pod \"community-operators-2sf56\" (UID: \"a4566b4a-d525-4e3c-8a3b-b1825d8e736e\") " pod="openshift-marketplace/community-operators-2sf56" Dec 02 14:32:52 crc kubenswrapper[4814]: I1202 14:32:52.057148 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4566b4a-d525-4e3c-8a3b-b1825d8e736e-catalog-content\") pod \"community-operators-2sf56\" (UID: \"a4566b4a-d525-4e3c-8a3b-b1825d8e736e\") " pod="openshift-marketplace/community-operators-2sf56" Dec 02 14:32:52 crc kubenswrapper[4814]: I1202 14:32:52.056948 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4566b4a-d525-4e3c-8a3b-b1825d8e736e-utilities\") pod \"community-operators-2sf56\" (UID: \"a4566b4a-d525-4e3c-8a3b-b1825d8e736e\") " pod="openshift-marketplace/community-operators-2sf56" Dec 02 14:32:52 crc kubenswrapper[4814]: I1202 14:32:52.059379 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4566b4a-d525-4e3c-8a3b-b1825d8e736e-catalog-content\") pod \"community-operators-2sf56\" (UID: \"a4566b4a-d525-4e3c-8a3b-b1825d8e736e\") " pod="openshift-marketplace/community-operators-2sf56" Dec 02 14:32:52 crc kubenswrapper[4814]: I1202 14:32:52.090193 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbwpk\" (UniqueName: \"kubernetes.io/projected/a4566b4a-d525-4e3c-8a3b-b1825d8e736e-kube-api-access-gbwpk\") pod \"community-operators-2sf56\" (UID: \"a4566b4a-d525-4e3c-8a3b-b1825d8e736e\") " pod="openshift-marketplace/community-operators-2sf56" Dec 02 14:32:52 crc kubenswrapper[4814]: I1202 14:32:52.191028 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2sf56" Dec 02 14:32:52 crc kubenswrapper[4814]: I1202 14:32:52.615045 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2sf56"] Dec 02 14:32:52 crc kubenswrapper[4814]: W1202 14:32:52.620324 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4566b4a_d525_4e3c_8a3b_b1825d8e736e.slice/crio-1b947beb075eb72caeccde6af6e69e9fc82877458a1c351747961e9b3d369303 WatchSource:0}: Error finding container 1b947beb075eb72caeccde6af6e69e9fc82877458a1c351747961e9b3d369303: Status 404 returned error can't find the container with id 1b947beb075eb72caeccde6af6e69e9fc82877458a1c351747961e9b3d369303 Dec 02 14:32:52 crc kubenswrapper[4814]: I1202 14:32:52.911062 4814 generic.go:334] "Generic (PLEG): container finished" podID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" containerID="cbd00b3460c61e733f94b636e27c291e53d28ee00fa4e427e41f42919c72a421" exitCode=0 Dec 02 14:32:52 crc kubenswrapper[4814]: I1202 14:32:52.911327 4814 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-2sf56" event={"ID":"a4566b4a-d525-4e3c-8a3b-b1825d8e736e","Type":"ContainerDied","Data":"cbd00b3460c61e733f94b636e27c291e53d28ee00fa4e427e41f42919c72a421"} Dec 02 14:32:52 crc kubenswrapper[4814]: I1202 14:32:52.911708 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sf56" event={"ID":"a4566b4a-d525-4e3c-8a3b-b1825d8e736e","Type":"ContainerStarted","Data":"1b947beb075eb72caeccde6af6e69e9fc82877458a1c351747961e9b3d369303"} Dec 02 14:32:52 crc kubenswrapper[4814]: I1202 14:32:52.915082 4814 generic.go:334] "Generic (PLEG): container finished" podID="6d3328f3-af2c-4579-929d-f9857cbccdb9" containerID="9565bc818e08ca355ac8a8a9037fbd6be17f1dd0bc21c4dd9db83f6e9dfb0908" exitCode=0 Dec 02 14:32:52 crc kubenswrapper[4814]: I1202 14:32:52.915194 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bntmr" event={"ID":"6d3328f3-af2c-4579-929d-f9857cbccdb9","Type":"ContainerDied","Data":"9565bc818e08ca355ac8a8a9037fbd6be17f1dd0bc21c4dd9db83f6e9dfb0908"} Dec 02 14:32:52 crc kubenswrapper[4814]: I1202 14:32:52.921018 4814 generic.go:334] "Generic (PLEG): container finished" podID="ed0035d3-3c24-4850-a7d5-e606377481bb" containerID="1d3ee69918faa0e39b2bfa2f59456ce156770ef240f7d44364d4fddaf47fb69f" exitCode=0 Dec 02 14:32:52 crc kubenswrapper[4814]: I1202 14:32:52.921081 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-csss6" event={"ID":"ed0035d3-3c24-4850-a7d5-e606377481bb","Type":"ContainerDied","Data":"1d3ee69918faa0e39b2bfa2f59456ce156770ef240f7d44364d4fddaf47fb69f"} Dec 02 14:32:52 crc kubenswrapper[4814]: I1202 14:32:52.929509 4814 generic.go:334] "Generic (PLEG): container finished" podID="cea53aaa-316d-439c-9269-0d09e8d23eef" containerID="ecf6da6cba355993336000f2fabc30e1241abcae89d1e655a99be6026002cc5c" exitCode=0 Dec 02 14:32:52 crc kubenswrapper[4814]: I1202 
14:32:52.931505 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xllv" event={"ID":"cea53aaa-316d-439c-9269-0d09e8d23eef","Type":"ContainerDied","Data":"ecf6da6cba355993336000f2fabc30e1241abcae89d1e655a99be6026002cc5c"} Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.731665 4814 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.733169 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.734724 4814 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.735073 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6" gracePeriod=15 Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.735251 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95" gracePeriod=15 Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.735269 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf" gracePeriod=15 Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 
14:32:53.735343 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a" gracePeriod=15 Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.735338 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948" gracePeriod=15 Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.736350 4814 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 14:32:53 crc kubenswrapper[4814]: E1202 14:32:53.736657 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.736681 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 02 14:32:53 crc kubenswrapper[4814]: E1202 14:32:53.736697 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.736704 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 02 14:32:53 crc kubenswrapper[4814]: E1202 14:32:53.736713 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.736720 4814 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 14:32:53 crc kubenswrapper[4814]: E1202 14:32:53.736731 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.736737 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 02 14:32:53 crc kubenswrapper[4814]: E1202 14:32:53.736747 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.736752 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 14:32:53 crc kubenswrapper[4814]: E1202 14:32:53.736759 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.736767 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 02 14:32:53 crc kubenswrapper[4814]: E1202 14:32:53.736780 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.736785 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.736891 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 02 
14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.736901 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.736912 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.736922 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.736932 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.736943 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.785903 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.887077 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.887706 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.887777 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.887855 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.887896 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.887920 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.887948 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.888019 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.940686 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xllv" event={"ID":"cea53aaa-316d-439c-9269-0d09e8d23eef","Type":"ContainerStarted","Data":"62817c5544768084c92162c622ee0d2675b4038838dfa02c6ffeaa3defe49d5a"} Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.942260 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.942894 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.943514 4814 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.944194 4814 generic.go:334] "Generic (PLEG): container finished" podID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" containerID="b97aa824654f086e6eee1a963ac25c24c0adbeacac51a8d6a2e2bae5ddc4ca83" exitCode=0 Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.944263 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"addefb6b-3fbe-4472-81cf-2e8ade2d4bfd","Type":"ContainerDied","Data":"b97aa824654f086e6eee1a963ac25c24c0adbeacac51a8d6a2e2bae5ddc4ca83"} Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.944899 4814 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.945571 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.945869 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.946124 4814 status_manager.go:851] "Failed to 
get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.951732 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.953299 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.954039 4814 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf" exitCode=0 Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.954099 4814 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a" exitCode=0 Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.954106 4814 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948" exitCode=0 Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.954114 4814 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95" exitCode=2 Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.954194 4814 scope.go:117] "RemoveContainer" containerID="077897fa2a7ea0c50188966c133a314354394afcd677ea835fb06030a68c4b67" Dec 02 14:32:53 
crc kubenswrapper[4814]: I1202 14:32:53.958610 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-csss6" event={"ID":"ed0035d3-3c24-4850-a7d5-e606377481bb","Type":"ContainerStarted","Data":"11b7792bb355d32b22fdc79e743364ff475efac5d42a04d7d271411e03ef617c"} Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.959437 4814 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.959955 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.960438 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.960744 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.961071 4814 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.989528 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.989595 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.989650 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.989668 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.989665 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.989735 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.989738 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.989801 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.989767 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.989856 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.989867 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.989990 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.989886 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.990054 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.990122 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 14:32:53 crc kubenswrapper[4814]: I1202 14:32:53.990209 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.086144 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 14:32:54 crc kubenswrapper[4814]: W1202 14:32:54.107883 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-ac5443903d41fd40c7686bcdbc4042efdc4276eb24578ef008c2be1e67fee1f5 WatchSource:0}: Error finding container ac5443903d41fd40c7686bcdbc4042efdc4276eb24578ef008c2be1e67fee1f5: Status 404 returned error can't find the container with id ac5443903d41fd40c7686bcdbc4042efdc4276eb24578ef008c2be1e67fee1f5 Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.969285 4814 generic.go:334] "Generic (PLEG): container finished" podID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" containerID="696ab48be7192cd346cc5a2191f1dfe41f26527e9da2b08895d5f7f3a85fdf25" exitCode=0 Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.969359 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sf56" event={"ID":"a4566b4a-d525-4e3c-8a3b-b1825d8e736e","Type":"ContainerDied","Data":"696ab48be7192cd346cc5a2191f1dfe41f26527e9da2b08895d5f7f3a85fdf25"} Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.970399 4814 
status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" pod="openshift-marketplace/community-operators-2sf56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.970590 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.970751 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.970906 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.971065 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.980021 4814 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.984174 4814 generic.go:334] "Generic (PLEG): container finished" podID="6d3328f3-af2c-4579-929d-f9857cbccdb9" containerID="e48545731bdee994c5cba1ffefa6b52ccd9f762a49439ac2db218da6f4727255" exitCode=0 Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.984299 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bntmr" event={"ID":"6d3328f3-af2c-4579-929d-f9857cbccdb9","Type":"ContainerDied","Data":"e48545731bdee994c5cba1ffefa6b52ccd9f762a49439ac2db218da6f4727255"} Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.985111 4814 status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" pod="openshift-marketplace/community-operators-2sf56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.985483 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.985818 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.986383 4814 
status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.986651 4814 status_manager.go:851] "Failed to get status for pod" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" pod="openshift-marketplace/redhat-operators-bntmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bntmr\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.987020 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.988494 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"254033a625592ab3fe47d14d99b95630e1e619f9a1283a473549336669213cc6"} Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.988523 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ac5443903d41fd40c7686bcdbc4042efdc4276eb24578ef008c2be1e67fee1f5"} Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.990259 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.990476 4814 status_manager.go:851] "Failed to get status for pod" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" pod="openshift-marketplace/redhat-operators-bntmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bntmr\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.990807 4814 status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" pod="openshift-marketplace/community-operators-2sf56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.991144 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.991364 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:54 crc kubenswrapper[4814]: I1202 14:32:54.991539 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" 
pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:55 crc kubenswrapper[4814]: I1202 14:32:55.995663 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"addefb6b-3fbe-4472-81cf-2e8ade2d4bfd","Type":"ContainerDied","Data":"23e878db16961553277cb0cbd5a59c9913ee678dbd48210546b4146e807ddfad"} Dec 02 14:32:55 crc kubenswrapper[4814]: I1202 14:32:55.996559 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23e878db16961553277cb0cbd5a59c9913ee678dbd48210546b4146e807ddfad" Dec 02 14:32:56 crc kubenswrapper[4814]: E1202 14:32:56.186458 4814 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.13:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" volumeName="registry-storage" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.629543 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.630365 4814 status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" pod="openshift-marketplace/community-operators-2sf56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.630812 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.631131 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.631438 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.631721 4814 status_manager.go:851] "Failed to get status for pod" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" pod="openshift-marketplace/redhat-operators-bntmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bntmr\": dial tcp 
38.102.83.13:6443: connect: connection refused" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.632157 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.664897 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.665939 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.666486 4814 status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" pod="openshift-marketplace/community-operators-2sf56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.666728 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.666965 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.667415 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.668007 4814 status_manager.go:851] "Failed to get status for pod" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" pod="openshift-marketplace/redhat-operators-bntmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bntmr\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.668426 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.668874 4814 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.745629 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/addefb6b-3fbe-4472-81cf-2e8ade2d4bfd-kubelet-dir\") pod \"addefb6b-3fbe-4472-81cf-2e8ade2d4bfd\" (UID: \"addefb6b-3fbe-4472-81cf-2e8ade2d4bfd\") " Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.746157 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/addefb6b-3fbe-4472-81cf-2e8ade2d4bfd-kube-api-access\") pod \"addefb6b-3fbe-4472-81cf-2e8ade2d4bfd\" (UID: \"addefb6b-3fbe-4472-81cf-2e8ade2d4bfd\") " Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.746299 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.746399 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/addefb6b-3fbe-4472-81cf-2e8ade2d4bfd-var-lock\") pod \"addefb6b-3fbe-4472-81cf-2e8ade2d4bfd\" (UID: \"addefb6b-3fbe-4472-81cf-2e8ade2d4bfd\") " Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.746546 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.746686 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.745970 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/host-path/addefb6b-3fbe-4472-81cf-2e8ade2d4bfd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" (UID: "addefb6b-3fbe-4472-81cf-2e8ade2d4bfd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.747116 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.747279 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/addefb6b-3fbe-4472-81cf-2e8ade2d4bfd-var-lock" (OuterVolumeSpecName: "var-lock") pod "addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" (UID: "addefb6b-3fbe-4472-81cf-2e8ade2d4bfd"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.747310 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.747330 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.754421 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/addefb6b-3fbe-4472-81cf-2e8ade2d4bfd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" (UID: "addefb6b-3fbe-4472-81cf-2e8ade2d4bfd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.848222 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/addefb6b-3fbe-4472-81cf-2e8ade2d4bfd-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.848539 4814 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.848603 4814 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/addefb6b-3fbe-4472-81cf-2e8ade2d4bfd-var-lock\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.848662 4814 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.848718 4814 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:56 crc kubenswrapper[4814]: I1202 14:32:56.848775 4814 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/addefb6b-3fbe-4472-81cf-2e8ade2d4bfd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.003919 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sf56" event={"ID":"a4566b4a-d525-4e3c-8a3b-b1825d8e736e","Type":"ContainerStarted","Data":"4498a48af994d4bbcb6aa78f044c413dc2b6822ddd77f6216b565244b1820a99"} Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.004623 4814 status_manager.go:851] "Failed to get status for pod" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" pod="openshift-marketplace/redhat-operators-bntmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bntmr\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.005031 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.005267 4814 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.005429 4814 status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" pod="openshift-marketplace/community-operators-2sf56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: 
connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.005597 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.005905 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.006107 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.007169 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.008002 4814 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6" exitCode=0 Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.008100 4814 scope.go:117] "RemoveContainer" containerID="04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.008119 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.014557 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bntmr" event={"ID":"6d3328f3-af2c-4579-929d-f9857cbccdb9","Type":"ContainerStarted","Data":"345a116abf31fd1325751478b1655265ffc1a599a3eca76578a63078260ec272"} Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.014666 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.015167 4814 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.015645 4814 status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" pod="openshift-marketplace/community-operators-2sf56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.016051 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.016343 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.016826 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.017108 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.017405 4814 status_manager.go:851] "Failed to get status for pod" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" pod="openshift-marketplace/redhat-operators-bntmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bntmr\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.032301 4814 status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" pod="openshift-marketplace/community-operators-2sf56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.032891 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.033272 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.033796 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.034171 4814 status_manager.go:851] "Failed to get status for pod" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" pod="openshift-marketplace/redhat-operators-bntmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bntmr\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.034564 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.035041 4814 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.036994 4814 scope.go:117] "RemoveContainer" containerID="6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.049306 4814 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.050668 4814 status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" pod="openshift-marketplace/community-operators-2sf56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.051479 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.051551 4814 scope.go:117] "RemoveContainer" containerID="302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.051851 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.052350 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.052726 4814 status_manager.go:851] "Failed to get status for pod" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" pod="openshift-marketplace/redhat-operators-bntmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bntmr\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.052930 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.066807 4814 scope.go:117] "RemoveContainer" containerID="5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.088645 4814 scope.go:117] "RemoveContainer" containerID="68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.104758 4814 scope.go:117] "RemoveContainer" containerID="86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.134558 4814 scope.go:117] "RemoveContainer" 
containerID="04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf" Dec 02 14:32:57 crc kubenswrapper[4814]: E1202 14:32:57.140636 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\": container with ID starting with 04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf not found: ID does not exist" containerID="04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.140686 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf"} err="failed to get container status \"04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\": rpc error: code = NotFound desc = could not find container \"04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf\": container with ID starting with 04e2e516cad3f3b2ebe6bb9655e0fb70af9c1e1b00b7eb751a9a81ebff42aabf not found: ID does not exist" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.140725 4814 scope.go:117] "RemoveContainer" containerID="6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a" Dec 02 14:32:57 crc kubenswrapper[4814]: E1202 14:32:57.140985 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\": container with ID starting with 6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a not found: ID does not exist" containerID="6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.141019 4814 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a"} err="failed to get container status \"6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\": rpc error: code = NotFound desc = could not find container \"6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a\": container with ID starting with 6a1413621b8e3cfcb24cb613c298802474db75691867dd42d1666cd7b1e0038a not found: ID does not exist" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.141039 4814 scope.go:117] "RemoveContainer" containerID="302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948" Dec 02 14:32:57 crc kubenswrapper[4814]: E1202 14:32:57.141311 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\": container with ID starting with 302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948 not found: ID does not exist" containerID="302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.141341 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948"} err="failed to get container status \"302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\": rpc error: code = NotFound desc = could not find container \"302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948\": container with ID starting with 302f0f710126f672debc45a5d174bebfe699ed7331ffecc095ac72e664876948 not found: ID does not exist" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.141373 4814 scope.go:117] "RemoveContainer" containerID="5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95" Dec 02 14:32:57 crc kubenswrapper[4814]: E1202 14:32:57.141602 4814 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\": container with ID starting with 5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95 not found: ID does not exist" containerID="5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.141636 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95"} err="failed to get container status \"5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\": rpc error: code = NotFound desc = could not find container \"5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95\": container with ID starting with 5ae6141053fefbc81ff1233a013878ee5e7c72f778a6c9361b1df45cc86b4b95 not found: ID does not exist" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.141654 4814 scope.go:117] "RemoveContainer" containerID="68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6" Dec 02 14:32:57 crc kubenswrapper[4814]: E1202 14:32:57.141896 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\": container with ID starting with 68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6 not found: ID does not exist" containerID="68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.141921 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6"} err="failed to get container status \"68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\": rpc error: code = NotFound desc = could not find container 
\"68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6\": container with ID starting with 68a265f4c3ae25ada34e7b153039ebc7e63822388b45cdf682ffce367b9561d6 not found: ID does not exist" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.141939 4814 scope.go:117] "RemoveContainer" containerID="86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4" Dec 02 14:32:57 crc kubenswrapper[4814]: E1202 14:32:57.142240 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\": container with ID starting with 86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4 not found: ID does not exist" containerID="86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4" Dec 02 14:32:57 crc kubenswrapper[4814]: I1202 14:32:57.142270 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4"} err="failed to get container status \"86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\": rpc error: code = NotFound desc = could not find container \"86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4\": container with ID starting with 86932ac440f15ed184af3d130ac2f04dc74a6c656a86bcc6a379b99c220f59b4 not found: ID does not exist" Dec 02 14:32:58 crc kubenswrapper[4814]: I1202 14:32:58.147851 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:58 crc kubenswrapper[4814]: I1202 14:32:58.148048 4814 status_manager.go:851] "Failed to get status for pod" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" 
pod="openshift-marketplace/redhat-operators-bntmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bntmr\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:58 crc kubenswrapper[4814]: I1202 14:32:58.148209 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:58 crc kubenswrapper[4814]: I1202 14:32:58.148377 4814 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:58 crc kubenswrapper[4814]: I1202 14:32:58.148514 4814 status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" pod="openshift-marketplace/community-operators-2sf56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:58 crc kubenswrapper[4814]: I1202 14:32:58.148649 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:58 crc kubenswrapper[4814]: I1202 14:32:58.149189 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:58 crc kubenswrapper[4814]: I1202 14:32:58.159999 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 02 14:32:58 crc kubenswrapper[4814]: E1202 14:32:58.768110 4814 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.13:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-bntmr.187d6c8b1e5597cc openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-bntmr,UID:6d3328f3-af2c-4579-929d-f9857cbccdb9,APIVersion:v1,ResourceVersion:29531,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-operator-index:v4.18\" in 845ms (845ms including waiting). 
Image size: 1607829346 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 14:32:53.764282316 +0000 UTC m=+235.888644520,LastTimestamp:2025-12-02 14:32:53.764282316 +0000 UTC m=+235.888644520,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 02 14:32:58 crc kubenswrapper[4814]: E1202 14:32:58.768277 4814 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:58 crc kubenswrapper[4814]: E1202 14:32:58.768768 4814 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:58 crc kubenswrapper[4814]: E1202 14:32:58.769115 4814 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:58 crc kubenswrapper[4814]: E1202 14:32:58.769517 4814 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:58 crc kubenswrapper[4814]: E1202 14:32:58.769724 4814 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:58 crc kubenswrapper[4814]: I1202 14:32:58.769753 4814 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" 
err="failed 5 attempts to update lease" Dec 02 14:32:58 crc kubenswrapper[4814]: E1202 14:32:58.769912 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="200ms" Dec 02 14:32:58 crc kubenswrapper[4814]: E1202 14:32:58.901130 4814 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.13:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-bntmr.187d6c8b1e5597cc openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-bntmr,UID:6d3328f3-af2c-4579-929d-f9857cbccdb9,APIVersion:v1,ResourceVersion:29531,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-operator-index:v4.18\" in 845ms (845ms including waiting). 
Image size: 1607829346 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 14:32:53.764282316 +0000 UTC m=+235.888644520,LastTimestamp:2025-12-02 14:32:53.764282316 +0000 UTC m=+235.888644520,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 02 14:32:58 crc kubenswrapper[4814]: E1202 14:32:58.970786 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="400ms" Dec 02 14:32:59 crc kubenswrapper[4814]: I1202 14:32:59.171684 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-csss6" Dec 02 14:32:59 crc kubenswrapper[4814]: I1202 14:32:59.171740 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-csss6" Dec 02 14:32:59 crc kubenswrapper[4814]: I1202 14:32:59.246997 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-csss6" Dec 02 14:32:59 crc kubenswrapper[4814]: I1202 14:32:59.247594 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:59 crc kubenswrapper[4814]: I1202 14:32:59.247906 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.13:6443: connect: connection refused" Dec 02 14:32:59 crc kubenswrapper[4814]: I1202 14:32:59.249998 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:59 crc kubenswrapper[4814]: I1202 14:32:59.250469 4814 status_manager.go:851] "Failed to get status for pod" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" pod="openshift-marketplace/redhat-operators-bntmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bntmr\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:59 crc kubenswrapper[4814]: I1202 14:32:59.250768 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:59 crc kubenswrapper[4814]: I1202 14:32:59.251042 4814 status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" pod="openshift-marketplace/community-operators-2sf56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:59 crc kubenswrapper[4814]: E1202 14:32:59.372295 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="800ms" Dec 02 14:32:59 crc 
kubenswrapper[4814]: I1202 14:32:59.768674 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9xllv" Dec 02 14:32:59 crc kubenswrapper[4814]: I1202 14:32:59.768747 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9xllv" Dec 02 14:32:59 crc kubenswrapper[4814]: I1202 14:32:59.810897 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9xllv" Dec 02 14:32:59 crc kubenswrapper[4814]: I1202 14:32:59.812074 4814 status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" pod="openshift-marketplace/community-operators-2sf56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:59 crc kubenswrapper[4814]: I1202 14:32:59.812637 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:59 crc kubenswrapper[4814]: I1202 14:32:59.813189 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:59 crc kubenswrapper[4814]: I1202 14:32:59.813558 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:59 crc kubenswrapper[4814]: I1202 14:32:59.813931 4814 status_manager.go:851] "Failed to get status for pod" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" pod="openshift-marketplace/redhat-operators-bntmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bntmr\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:32:59 crc kubenswrapper[4814]: I1202 14:32:59.814294 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:00 crc kubenswrapper[4814]: I1202 14:33:00.071611 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-csss6" Dec 02 14:33:00 crc kubenswrapper[4814]: I1202 14:33:00.072589 4814 status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" pod="openshift-marketplace/community-operators-2sf56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:00 crc kubenswrapper[4814]: I1202 14:33:00.072904 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:00 crc kubenswrapper[4814]: 
I1202 14:33:00.073396 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:00 crc kubenswrapper[4814]: I1202 14:33:00.073666 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:00 crc kubenswrapper[4814]: I1202 14:33:00.073889 4814 status_manager.go:851] "Failed to get status for pod" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" pod="openshift-marketplace/redhat-operators-bntmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bntmr\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:00 crc kubenswrapper[4814]: I1202 14:33:00.074105 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:00 crc kubenswrapper[4814]: I1202 14:33:00.079582 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9xllv" Dec 02 14:33:00 crc kubenswrapper[4814]: I1202 14:33:00.079993 4814 status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" pod="openshift-marketplace/community-operators-2sf56" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:00 crc kubenswrapper[4814]: I1202 14:33:00.080156 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:00 crc kubenswrapper[4814]: I1202 14:33:00.080444 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:00 crc kubenswrapper[4814]: I1202 14:33:00.080806 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:00 crc kubenswrapper[4814]: I1202 14:33:00.083465 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:00 crc kubenswrapper[4814]: I1202 14:33:00.083710 4814 status_manager.go:851] "Failed to get status for pod" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" pod="openshift-marketplace/redhat-operators-bntmr" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bntmr\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:00 crc kubenswrapper[4814]: E1202 14:33:00.174223 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="1.6s" Dec 02 14:33:01 crc kubenswrapper[4814]: I1202 14:33:01.582687 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bntmr" Dec 02 14:33:01 crc kubenswrapper[4814]: I1202 14:33:01.583444 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bntmr" Dec 02 14:33:01 crc kubenswrapper[4814]: E1202 14:33:01.775318 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="3.2s" Dec 02 14:33:02 crc kubenswrapper[4814]: I1202 14:33:02.192631 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2sf56" Dec 02 14:33:02 crc kubenswrapper[4814]: I1202 14:33:02.192726 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2sf56" Dec 02 14:33:02 crc kubenswrapper[4814]: I1202 14:33:02.236069 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2sf56" Dec 02 14:33:02 crc kubenswrapper[4814]: I1202 14:33:02.237120 4814 status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" pod="openshift-marketplace/community-operators-2sf56" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:02 crc kubenswrapper[4814]: I1202 14:33:02.237597 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:02 crc kubenswrapper[4814]: I1202 14:33:02.237966 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:02 crc kubenswrapper[4814]: I1202 14:33:02.238508 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:02 crc kubenswrapper[4814]: I1202 14:33:02.239341 4814 status_manager.go:851] "Failed to get status for pod" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" pod="openshift-marketplace/redhat-operators-bntmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bntmr\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:02 crc kubenswrapper[4814]: I1202 14:33:02.239800 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:02 crc kubenswrapper[4814]: I1202 14:33:02.637634 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bntmr" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" containerName="registry-server" probeResult="failure" output=< Dec 02 14:33:02 crc kubenswrapper[4814]: timeout: failed to connect service ":50051" within 1s Dec 02 14:33:02 crc kubenswrapper[4814]: > Dec 02 14:33:03 crc kubenswrapper[4814]: I1202 14:33:03.121097 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2sf56" Dec 02 14:33:03 crc kubenswrapper[4814]: I1202 14:33:03.121778 4814 status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" pod="openshift-marketplace/community-operators-2sf56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:03 crc kubenswrapper[4814]: I1202 14:33:03.122438 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:03 crc kubenswrapper[4814]: I1202 14:33:03.122938 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:03 crc 
kubenswrapper[4814]: I1202 14:33:03.123264 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:03 crc kubenswrapper[4814]: I1202 14:33:03.124346 4814 status_manager.go:851] "Failed to get status for pod" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" pod="openshift-marketplace/redhat-operators-bntmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bntmr\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:03 crc kubenswrapper[4814]: I1202 14:33:03.124712 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:04 crc kubenswrapper[4814]: E1202 14:33:04.976036 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="6.4s" Dec 02 14:33:05 crc kubenswrapper[4814]: I1202 14:33:05.143865 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:33:05 crc kubenswrapper[4814]: I1202 14:33:05.145208 4814 status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" pod="openshift-marketplace/community-operators-2sf56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:05 crc kubenswrapper[4814]: I1202 14:33:05.145853 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:05 crc kubenswrapper[4814]: I1202 14:33:05.146599 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:05 crc kubenswrapper[4814]: I1202 14:33:05.147274 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:05 crc kubenswrapper[4814]: I1202 14:33:05.147559 4814 status_manager.go:851] "Failed to get status for pod" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" pod="openshift-marketplace/redhat-operators-bntmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bntmr\": dial tcp 
38.102.83.13:6443: connect: connection refused" Dec 02 14:33:05 crc kubenswrapper[4814]: I1202 14:33:05.148099 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:05 crc kubenswrapper[4814]: I1202 14:33:05.169877 4814 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aed12756-8a1c-422e-9a9f-87314f49c27d" Dec 02 14:33:05 crc kubenswrapper[4814]: I1202 14:33:05.169933 4814 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aed12756-8a1c-422e-9a9f-87314f49c27d" Dec 02 14:33:05 crc kubenswrapper[4814]: E1202 14:33:05.170337 4814 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:33:05 crc kubenswrapper[4814]: I1202 14:33:05.170979 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:33:05 crc kubenswrapper[4814]: W1202 14:33:05.202911 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-f91b656e0abb3ffaed8ec9fc60dcc75622efbb0beee37baf405c2e02ad7ae04a WatchSource:0}: Error finding container f91b656e0abb3ffaed8ec9fc60dcc75622efbb0beee37baf405c2e02ad7ae04a: Status 404 returned error can't find the container with id f91b656e0abb3ffaed8ec9fc60dcc75622efbb0beee37baf405c2e02ad7ae04a Dec 02 14:33:06 crc kubenswrapper[4814]: I1202 14:33:06.078970 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f91b656e0abb3ffaed8ec9fc60dcc75622efbb0beee37baf405c2e02ad7ae04a"} Dec 02 14:33:06 crc kubenswrapper[4814]: I1202 14:33:06.491453 4814 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 02 14:33:06 crc kubenswrapper[4814]: I1202 14:33:06.491556 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 02 14:33:07 crc kubenswrapper[4814]: I1202 14:33:07.086513 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"132f652b9c6236e9b2f9f835c158e271e20d92369d5a789418db42c4e062f009"} Dec 02 14:33:07 crc kubenswrapper[4814]: I1202 14:33:07.087002 4814 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aed12756-8a1c-422e-9a9f-87314f49c27d" Dec 02 14:33:07 crc kubenswrapper[4814]: I1202 14:33:07.087047 4814 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aed12756-8a1c-422e-9a9f-87314f49c27d" Dec 02 14:33:07 crc kubenswrapper[4814]: E1202 14:33:07.087636 4814 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:33:07 crc kubenswrapper[4814]: I1202 14:33:07.087721 4814 status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" pod="openshift-marketplace/community-operators-2sf56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:07 crc kubenswrapper[4814]: I1202 14:33:07.088154 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:07 crc kubenswrapper[4814]: I1202 14:33:07.088708 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:07 crc kubenswrapper[4814]: I1202 14:33:07.088963 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:07 crc kubenswrapper[4814]: I1202 14:33:07.089354 4814 status_manager.go:851] "Failed to get status for pod" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" pod="openshift-marketplace/redhat-operators-bntmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bntmr\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:07 crc kubenswrapper[4814]: I1202 14:33:07.089834 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.095571 4814 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="132f652b9c6236e9b2f9f835c158e271e20d92369d5a789418db42c4e062f009" exitCode=0 Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.095629 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"132f652b9c6236e9b2f9f835c158e271e20d92369d5a789418db42c4e062f009"} Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.095839 4814 
kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aed12756-8a1c-422e-9a9f-87314f49c27d" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.095857 4814 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aed12756-8a1c-422e-9a9f-87314f49c27d" Dec 02 14:33:08 crc kubenswrapper[4814]: E1202 14:33:08.096293 4814 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.096882 4814 status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" pod="openshift-marketplace/community-operators-2sf56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.097350 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.097613 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.097876 4814 status_manager.go:851] "Failed to get status for pod" 
podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.098064 4814 status_manager.go:851] "Failed to get status for pod" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" pod="openshift-marketplace/redhat-operators-bntmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bntmr\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.098307 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.099429 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.099496 4814 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4" exitCode=1 Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.099534 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4"} Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.100148 4814 scope.go:117] 
"RemoveContainer" containerID="4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.100282 4814 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.100663 4814 status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" pod="openshift-marketplace/community-operators-2sf56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.100940 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.101216 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.101456 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.101701 4814 status_manager.go:851] "Failed to get status for pod" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" pod="openshift-marketplace/redhat-operators-bntmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bntmr\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.102127 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.150634 4814 status_manager.go:851] "Failed to get status for pod" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.151522 4814 status_manager.go:851] "Failed to get status for pod" podUID="ed0035d3-3c24-4850-a7d5-e606377481bb" pod="openshift-marketplace/certified-operators-csss6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-csss6\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.151704 4814 status_manager.go:851] "Failed to get status for pod" podUID="cea53aaa-316d-439c-9269-0d09e8d23eef" pod="openshift-marketplace/redhat-marketplace-9xllv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9xllv\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.151841 4814 status_manager.go:851] "Failed to get status for pod" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" pod="openshift-marketplace/redhat-operators-bntmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bntmr\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.151980 4814 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.152116 4814 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.152281 4814 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:08 crc kubenswrapper[4814]: I1202 14:33:08.152694 4814 status_manager.go:851] "Failed to get status for pod" podUID="a4566b4a-d525-4e3c-8a3b-b1825d8e736e" 
pod="openshift-marketplace/community-operators-2sf56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sf56\": dial tcp 38.102.83.13:6443: connect: connection refused" Dec 02 14:33:09 crc kubenswrapper[4814]: I1202 14:33:09.126962 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 02 14:33:09 crc kubenswrapper[4814]: I1202 14:33:09.127097 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"23e0e91e8719bccce1cbda3ddefea944a9c4cf71d13e94887955a16caddba6d2"} Dec 02 14:33:09 crc kubenswrapper[4814]: I1202 14:33:09.141243 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a2bfc2e1f0c9ca09019c82ec1f5b7f90b3b8750cde567486062d36e85692bd8b"} Dec 02 14:33:09 crc kubenswrapper[4814]: I1202 14:33:09.141295 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a6b36062072b43b15460a5c946dce419442fb06de6030869bf4fc4db704c54e2"} Dec 02 14:33:09 crc kubenswrapper[4814]: I1202 14:33:09.141306 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"955fa84b26b7a2ddb12d747d20911c932df3770bd03fb497d7dfbf8ff2a02019"} Dec 02 14:33:10 crc kubenswrapper[4814]: I1202 14:33:10.155000 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0f2421403b1a1fc00c6c4cb7bef3911aafe008731a2593a3270963f95bf4ea6f"} Dec 02 14:33:10 crc kubenswrapper[4814]: I1202 14:33:10.155986 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a8287940456524a143c0d3d712e02435b643a6933586384ed2c5b9094ceb2160"} Dec 02 14:33:10 crc kubenswrapper[4814]: I1202 14:33:10.156015 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:33:10 crc kubenswrapper[4814]: I1202 14:33:10.156004 4814 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aed12756-8a1c-422e-9a9f-87314f49c27d" Dec 02 14:33:10 crc kubenswrapper[4814]: I1202 14:33:10.156044 4814 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aed12756-8a1c-422e-9a9f-87314f49c27d" Dec 02 14:33:10 crc kubenswrapper[4814]: I1202 14:33:10.172142 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:33:10 crc kubenswrapper[4814]: I1202 14:33:10.172203 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:33:10 crc kubenswrapper[4814]: I1202 14:33:10.179695 4814 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]log ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]etcd ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 02 14:33:10 crc kubenswrapper[4814]: 
[+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/generic-apiserver-start-informers ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/priority-and-fairness-filter ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/start-apiextensions-informers ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/start-apiextensions-controllers ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/crd-informer-synced ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/start-system-namespaces-controller ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 02 14:33:10 crc kubenswrapper[4814]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 02 14:33:10 crc kubenswrapper[4814]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Dec 02 14:33:10 crc kubenswrapper[4814]: 
[+]poststarthook/priority-and-fairness-config-producer ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/bootstrap-controller ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/start-kube-aggregator-informers ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/apiservice-registration-controller ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/apiservice-discovery-controller ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]autoregister-completion ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/apiservice-openapi-controller ok Dec 02 14:33:10 crc kubenswrapper[4814]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 02 14:33:10 crc kubenswrapper[4814]: livez check failed Dec 02 14:33:10 crc kubenswrapper[4814]: I1202 14:33:10.179763 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 14:33:11 crc kubenswrapper[4814]: I1202 14:33:11.671846 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bntmr" Dec 02 14:33:11 crc kubenswrapper[4814]: I1202 14:33:11.727211 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bntmr" Dec 02 14:33:13 crc kubenswrapper[4814]: I1202 14:33:13.354349 4814 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 14:33:13 crc kubenswrapper[4814]: I1202 14:33:13.354463 4814 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 02 14:33:13 crc kubenswrapper[4814]: I1202 14:33:13.354561 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 02 14:33:15 crc kubenswrapper[4814]: I1202 14:33:15.169057 4814 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:33:15 crc kubenswrapper[4814]: I1202 14:33:15.181393 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:33:16 crc kubenswrapper[4814]: I1202 14:33:16.196567 4814 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aed12756-8a1c-422e-9a9f-87314f49c27d" Dec 02 14:33:16 crc kubenswrapper[4814]: I1202 14:33:16.197104 4814 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aed12756-8a1c-422e-9a9f-87314f49c27d" Dec 02 14:33:16 crc kubenswrapper[4814]: I1202 14:33:16.204586 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:33:16 crc kubenswrapper[4814]: I1202 14:33:16.210119 4814 status_manager.go:861] "Pod was deleted and then 
recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="9017cc47-1f5a-4a85-a3ee-34140d3778d2" Dec 02 14:33:16 crc kubenswrapper[4814]: I1202 14:33:16.490546 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 14:33:17 crc kubenswrapper[4814]: I1202 14:33:17.201389 4814 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aed12756-8a1c-422e-9a9f-87314f49c27d" Dec 02 14:33:17 crc kubenswrapper[4814]: I1202 14:33:17.201430 4814 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aed12756-8a1c-422e-9a9f-87314f49c27d" Dec 02 14:33:18 crc kubenswrapper[4814]: I1202 14:33:18.161994 4814 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="9017cc47-1f5a-4a85-a3ee-34140d3778d2" Dec 02 14:33:23 crc kubenswrapper[4814]: I1202 14:33:23.354312 4814 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 02 14:33:23 crc kubenswrapper[4814]: I1202 14:33:23.354421 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 02 14:33:25 crc kubenswrapper[4814]: I1202 14:33:25.309379 4814 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 02 14:33:25 crc kubenswrapper[4814]: I1202 14:33:25.377653 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 02 14:33:25 crc kubenswrapper[4814]: I1202 14:33:25.406356 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 02 14:33:25 crc kubenswrapper[4814]: I1202 14:33:25.656965 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 14:33:25 crc kubenswrapper[4814]: I1202 14:33:25.689157 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 02 14:33:26 crc kubenswrapper[4814]: I1202 14:33:26.396912 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 02 14:33:26 crc kubenswrapper[4814]: I1202 14:33:26.400568 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 02 14:33:26 crc kubenswrapper[4814]: I1202 14:33:26.682917 4814 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 02 14:33:26 crc kubenswrapper[4814]: I1202 14:33:26.752503 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 02 14:33:26 crc kubenswrapper[4814]: I1202 14:33:26.757143 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 02 14:33:26 crc kubenswrapper[4814]: I1202 14:33:26.819701 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 02 14:33:27 crc kubenswrapper[4814]: I1202 14:33:27.020516 4814 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 02 14:33:27 crc kubenswrapper[4814]: I1202 14:33:27.297711 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 02 14:33:27 crc kubenswrapper[4814]: I1202 14:33:27.426678 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 14:33:27 crc kubenswrapper[4814]: I1202 14:33:27.436483 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 02 14:33:27 crc kubenswrapper[4814]: I1202 14:33:27.474755 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 02 14:33:27 crc kubenswrapper[4814]: I1202 14:33:27.516497 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 02 14:33:27 crc kubenswrapper[4814]: I1202 14:33:27.664011 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 02 14:33:27 crc kubenswrapper[4814]: I1202 14:33:27.695854 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 02 14:33:27 crc kubenswrapper[4814]: I1202 14:33:27.794172 4814 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 02 14:33:27 crc kubenswrapper[4814]: I1202 14:33:27.809992 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 02 14:33:27 crc kubenswrapper[4814]: I1202 14:33:27.961420 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 02 14:33:27 crc kubenswrapper[4814]: I1202 14:33:27.977505 4814 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 02 14:33:27 crc kubenswrapper[4814]: I1202 14:33:27.984527 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 02 14:33:28 crc kubenswrapper[4814]: I1202 14:33:28.070036 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 02 14:33:28 crc kubenswrapper[4814]: I1202 14:33:28.127301 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 02 14:33:28 crc kubenswrapper[4814]: I1202 14:33:28.152555 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 02 14:33:28 crc kubenswrapper[4814]: I1202 14:33:28.164437 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 02 14:33:28 crc kubenswrapper[4814]: I1202 14:33:28.196585 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 02 14:33:28 crc kubenswrapper[4814]: I1202 14:33:28.290511 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 02 14:33:28 crc kubenswrapper[4814]: I1202 14:33:28.312154 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 02 14:33:28 crc kubenswrapper[4814]: I1202 14:33:28.358976 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 02 14:33:28 crc kubenswrapper[4814]: I1202 14:33:28.602204 4814 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 02 14:33:28 crc kubenswrapper[4814]: I1202 14:33:28.748175 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 02 14:33:28 crc kubenswrapper[4814]: I1202 14:33:28.800605 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 02 14:33:28 crc kubenswrapper[4814]: I1202 14:33:28.876658 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 02 14:33:28 crc kubenswrapper[4814]: I1202 14:33:28.945788 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 02 14:33:28 crc kubenswrapper[4814]: I1202 14:33:28.966401 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 02 14:33:29 crc kubenswrapper[4814]: I1202 14:33:29.005295 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 02 14:33:29 crc kubenswrapper[4814]: I1202 14:33:29.051802 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 02 14:33:29 crc kubenswrapper[4814]: I1202 14:33:29.121004 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 02 14:33:29 crc kubenswrapper[4814]: I1202 14:33:29.136862 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 02 14:33:29 crc kubenswrapper[4814]: I1202 14:33:29.173576 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 02 
14:33:29 crc kubenswrapper[4814]: I1202 14:33:29.199429 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 02 14:33:29 crc kubenswrapper[4814]: I1202 14:33:29.314645 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 02 14:33:29 crc kubenswrapper[4814]: I1202 14:33:29.325554 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 02 14:33:29 crc kubenswrapper[4814]: I1202 14:33:29.354664 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 02 14:33:29 crc kubenswrapper[4814]: I1202 14:33:29.640115 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 02 14:33:29 crc kubenswrapper[4814]: I1202 14:33:29.678053 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 02 14:33:29 crc kubenswrapper[4814]: I1202 14:33:29.754952 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 02 14:33:29 crc kubenswrapper[4814]: I1202 14:33:29.801399 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 02 14:33:29 crc kubenswrapper[4814]: I1202 14:33:29.848333 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 02 14:33:29 crc kubenswrapper[4814]: I1202 14:33:29.867662 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 02 14:33:29 crc kubenswrapper[4814]: I1202 14:33:29.886396 4814 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-dns"/"kube-root-ca.crt" Dec 02 14:33:29 crc kubenswrapper[4814]: I1202 14:33:29.926880 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 02 14:33:29 crc kubenswrapper[4814]: I1202 14:33:29.966144 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 02 14:33:30 crc kubenswrapper[4814]: I1202 14:33:30.047252 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 02 14:33:30 crc kubenswrapper[4814]: I1202 14:33:30.245631 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 02 14:33:30 crc kubenswrapper[4814]: I1202 14:33:30.278254 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 02 14:33:30 crc kubenswrapper[4814]: I1202 14:33:30.280091 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 02 14:33:30 crc kubenswrapper[4814]: I1202 14:33:30.385799 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 02 14:33:30 crc kubenswrapper[4814]: I1202 14:33:30.515122 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 02 14:33:30 crc kubenswrapper[4814]: I1202 14:33:30.648338 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 02 14:33:30 crc kubenswrapper[4814]: I1202 14:33:30.756848 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 02 14:33:30 crc kubenswrapper[4814]: I1202 14:33:30.775607 4814 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 02 14:33:30 crc kubenswrapper[4814]: I1202 14:33:30.796264 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 02 14:33:30 crc kubenswrapper[4814]: I1202 14:33:30.799482 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 02 14:33:30 crc kubenswrapper[4814]: I1202 14:33:30.850250 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 02 14:33:30 crc kubenswrapper[4814]: I1202 14:33:30.938213 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 02 14:33:30 crc kubenswrapper[4814]: I1202 14:33:30.966324 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 02 14:33:30 crc kubenswrapper[4814]: I1202 14:33:30.967193 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 02 14:33:30 crc kubenswrapper[4814]: I1202 14:33:30.978723 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 02 14:33:31 crc kubenswrapper[4814]: I1202 14:33:31.049211 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 02 14:33:31 crc kubenswrapper[4814]: I1202 14:33:31.051787 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 02 14:33:31 crc kubenswrapper[4814]: I1202 14:33:31.081795 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 02 14:33:31 crc kubenswrapper[4814]: I1202 
14:33:31.125573 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 02 14:33:31 crc kubenswrapper[4814]: I1202 14:33:31.125916 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 02 14:33:31 crc kubenswrapper[4814]: I1202 14:33:31.161682 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 02 14:33:31 crc kubenswrapper[4814]: I1202 14:33:31.311864 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 02 14:33:31 crc kubenswrapper[4814]: I1202 14:33:31.330595 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 02 14:33:31 crc kubenswrapper[4814]: I1202 14:33:31.345281 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 02 14:33:31 crc kubenswrapper[4814]: I1202 14:33:31.408822 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 02 14:33:31 crc kubenswrapper[4814]: I1202 14:33:31.660976 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 02 14:33:31 crc kubenswrapper[4814]: I1202 14:33:31.678114 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 02 14:33:31 crc kubenswrapper[4814]: I1202 14:33:31.722504 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 02 14:33:31 crc kubenswrapper[4814]: I1202 14:33:31.773035 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 02 
14:33:31 crc kubenswrapper[4814]: I1202 14:33:31.832735 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 02 14:33:31 crc kubenswrapper[4814]: I1202 14:33:31.958607 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.015153 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.033209 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.049658 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.075145 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.159137 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.176374 4814 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.283890 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.288502 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.394508 4814 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.440352 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.623528 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.629274 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.720567 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.764785 4814 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.765730 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-csss6" podStartSLOduration=42.257816951 podStartE2EDuration="44.765705346s" podCreationTimestamp="2025-12-02 14:32:48 +0000 UTC" firstStartedPulling="2025-12-02 14:32:50.876598905 +0000 UTC m=+233.000961119" lastFinishedPulling="2025-12-02 14:32:53.38448732 +0000 UTC m=+235.508849514" observedRunningTime="2025-12-02 14:33:15.013276805 +0000 UTC m=+257.137639009" watchObservedRunningTime="2025-12-02 14:33:32.765705346 +0000 UTC m=+274.890067560" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.769044 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=39.769030073 podStartE2EDuration="39.769030073s" podCreationTimestamp="2025-12-02 14:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-12-02 14:33:15.060287768 +0000 UTC m=+257.184649982" watchObservedRunningTime="2025-12-02 14:33:32.769030073 +0000 UTC m=+274.893392287" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.769845 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bntmr" podStartSLOduration=38.754945681 podStartE2EDuration="41.769837044s" podCreationTimestamp="2025-12-02 14:32:51 +0000 UTC" firstStartedPulling="2025-12-02 14:32:52.919162818 +0000 UTC m=+235.043525032" lastFinishedPulling="2025-12-02 14:32:55.934054191 +0000 UTC m=+238.058416395" observedRunningTime="2025-12-02 14:33:15.050594808 +0000 UTC m=+257.174957012" watchObservedRunningTime="2025-12-02 14:33:32.769837044 +0000 UTC m=+274.894199258" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.770399 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2sf56" podStartSLOduration=39.154335062 podStartE2EDuration="41.770394778s" podCreationTimestamp="2025-12-02 14:32:51 +0000 UTC" firstStartedPulling="2025-12-02 14:32:52.913376129 +0000 UTC m=+235.037738343" lastFinishedPulling="2025-12-02 14:32:55.529435865 +0000 UTC m=+237.653798059" observedRunningTime="2025-12-02 14:33:14.965638756 +0000 UTC m=+257.090000960" watchObservedRunningTime="2025-12-02 14:33:32.770394778 +0000 UTC m=+274.894756992" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.770490 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9xllv" podStartSLOduration=41.169784262 podStartE2EDuration="43.77048547s" podCreationTimestamp="2025-12-02 14:32:49 +0000 UTC" firstStartedPulling="2025-12-02 14:32:50.870995421 +0000 UTC m=+232.995357645" lastFinishedPulling="2025-12-02 14:32:53.471696639 +0000 UTC m=+235.596058853" observedRunningTime="2025-12-02 14:33:15.030410977 +0000 UTC m=+257.154773181" 
watchObservedRunningTime="2025-12-02 14:33:32.77048547 +0000 UTC m=+274.894847684" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.770870 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.770917 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.775570 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.794322 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.794302148 podStartE2EDuration="17.794302148s" podCreationTimestamp="2025-12-02 14:33:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:33:32.792538662 +0000 UTC m=+274.916900906" watchObservedRunningTime="2025-12-02 14:33:32.794302148 +0000 UTC m=+274.918664352" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.800456 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.801907 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.816698 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.822455 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.863318 4814 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.951035 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 02 14:33:32 crc kubenswrapper[4814]: I1202 14:33:32.999016 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.009652 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.240315 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.298203 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.324874 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.353292 4814 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.353392 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 02 
14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.353473 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.354358 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"23e0e91e8719bccce1cbda3ddefea944a9c4cf71d13e94887955a16caddba6d2"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.354545 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://23e0e91e8719bccce1cbda3ddefea944a9c4cf71d13e94887955a16caddba6d2" gracePeriod=30 Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.558995 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.572746 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.622423 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.627208 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.641122 4814 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.709688 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.740345 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.741303 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.749607 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.749837 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.757297 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.759137 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.849768 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.866263 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 02 14:33:33 crc kubenswrapper[4814]: I1202 14:33:33.967481 4814 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 02 14:33:34 crc kubenswrapper[4814]: I1202 14:33:34.145737 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 02 14:33:34 crc kubenswrapper[4814]: I1202 14:33:34.201800 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 02 14:33:34 crc kubenswrapper[4814]: I1202 14:33:34.294113 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 02 14:33:34 crc kubenswrapper[4814]: I1202 14:33:34.328769 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 02 14:33:34 crc kubenswrapper[4814]: I1202 14:33:34.343714 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 02 14:33:34 crc kubenswrapper[4814]: I1202 14:33:34.427171 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 02 14:33:34 crc kubenswrapper[4814]: I1202 14:33:34.530669 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 02 14:33:34 crc kubenswrapper[4814]: I1202 14:33:34.573463 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 02 14:33:34 crc kubenswrapper[4814]: I1202 14:33:34.583129 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 02 14:33:34 crc kubenswrapper[4814]: I1202 14:33:34.679858 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 02 14:33:34 crc kubenswrapper[4814]: 
I1202 14:33:34.683555 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 02 14:33:34 crc kubenswrapper[4814]: I1202 14:33:34.739198 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 02 14:33:34 crc kubenswrapper[4814]: I1202 14:33:34.743932 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 02 14:33:34 crc kubenswrapper[4814]: I1202 14:33:34.836811 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 02 14:33:34 crc kubenswrapper[4814]: I1202 14:33:34.935201 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 02 14:33:34 crc kubenswrapper[4814]: I1202 14:33:34.937201 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 02 14:33:34 crc kubenswrapper[4814]: I1202 14:33:34.983871 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 02 14:33:35 crc kubenswrapper[4814]: I1202 14:33:35.079951 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 02 14:33:35 crc kubenswrapper[4814]: I1202 14:33:35.151273 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 02 14:33:35 crc kubenswrapper[4814]: I1202 14:33:35.151886 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 02 14:33:35 crc kubenswrapper[4814]: I1202 14:33:35.179515 4814 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 02 14:33:35 crc kubenswrapper[4814]: I1202 14:33:35.201245 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 02 14:33:35 crc kubenswrapper[4814]: I1202 14:33:35.216013 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 02 14:33:35 crc kubenswrapper[4814]: I1202 14:33:35.250513 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 02 14:33:35 crc kubenswrapper[4814]: I1202 14:33:35.277479 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 02 14:33:35 crc kubenswrapper[4814]: I1202 14:33:35.414269 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 02 14:33:35 crc kubenswrapper[4814]: I1202 14:33:35.417211 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 02 14:33:35 crc kubenswrapper[4814]: I1202 14:33:35.493662 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 02 14:33:35 crc kubenswrapper[4814]: I1202 14:33:35.554461 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 02 14:33:35 crc kubenswrapper[4814]: I1202 14:33:35.668440 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 02 14:33:35 crc kubenswrapper[4814]: I1202 14:33:35.757159 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 02 14:33:35 crc kubenswrapper[4814]: I1202 14:33:35.818807 4814 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 02 14:33:35 crc kubenswrapper[4814]: I1202 14:33:35.884510 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 02 14:33:35 crc kubenswrapper[4814]: I1202 14:33:35.898193 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 02 14:33:35 crc kubenswrapper[4814]: I1202 14:33:35.931374 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 02 14:33:35 crc kubenswrapper[4814]: I1202 14:33:35.969199 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.001856 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.009728 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.043136 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.064817 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.109301 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.133691 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.135199 4814 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.151916 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.172645 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.457886 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.491616 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.496412 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.513812 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.514600 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.532572 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.534894 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.579082 4814 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"default-dockercfg-chnjx" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.642824 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.683891 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.691672 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.713645 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.758154 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.855750 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 02 14:33:36 crc kubenswrapper[4814]: I1202 14:33:36.919929 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 14:33:37.032676 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 14:33:37.164304 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 14:33:37.205297 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 14:33:37.216886 4814 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 14:33:37.234991 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 14:33:37.300776 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 14:33:37.321550 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 14:33:37.335656 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 14:33:37.354607 4814 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 14:33:37.575463 4814 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 14:33:37.575790 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://254033a625592ab3fe47d14d99b95630e1e619f9a1283a473549336669213cc6" gracePeriod=5 Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 14:33:37.602328 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 14:33:37.602558 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 
14:33:37.651546 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 14:33:37.738637 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 14:33:37.756818 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 14:33:37.781080 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 14:33:37.787218 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 14:33:37.788076 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 14:33:37.844784 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 02 14:33:37 crc kubenswrapper[4814]: I1202 14:33:37.910807 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 02 14:33:38 crc kubenswrapper[4814]: I1202 14:33:38.042607 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 02 14:33:38 crc kubenswrapper[4814]: I1202 14:33:38.119782 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 02 14:33:38 crc kubenswrapper[4814]: I1202 14:33:38.185940 4814 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"trusted-ca-bundle" Dec 02 14:33:38 crc kubenswrapper[4814]: I1202 14:33:38.212139 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 02 14:33:38 crc kubenswrapper[4814]: I1202 14:33:38.220781 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 02 14:33:38 crc kubenswrapper[4814]: I1202 14:33:38.317654 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 02 14:33:38 crc kubenswrapper[4814]: I1202 14:33:38.406141 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 02 14:33:38 crc kubenswrapper[4814]: I1202 14:33:38.407269 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 02 14:33:38 crc kubenswrapper[4814]: I1202 14:33:38.409136 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 02 14:33:38 crc kubenswrapper[4814]: I1202 14:33:38.494353 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 02 14:33:38 crc kubenswrapper[4814]: I1202 14:33:38.826617 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 02 14:33:38 crc kubenswrapper[4814]: I1202 14:33:38.830597 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 02 14:33:38 crc kubenswrapper[4814]: I1202 14:33:38.872992 4814 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 02 14:33:38 crc kubenswrapper[4814]: I1202 14:33:38.931948 4814 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 02 14:33:39 crc kubenswrapper[4814]: I1202 14:33:39.000446 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 02 14:33:39 crc kubenswrapper[4814]: I1202 14:33:39.026302 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 02 14:33:39 crc kubenswrapper[4814]: I1202 14:33:39.076138 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 02 14:33:39 crc kubenswrapper[4814]: I1202 14:33:39.077220 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 02 14:33:39 crc kubenswrapper[4814]: I1202 14:33:39.093997 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 02 14:33:39 crc kubenswrapper[4814]: I1202 14:33:39.094251 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 02 14:33:39 crc kubenswrapper[4814]: I1202 14:33:39.153739 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 02 14:33:39 crc kubenswrapper[4814]: I1202 14:33:39.214820 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 02 14:33:39 crc kubenswrapper[4814]: I1202 14:33:39.284857 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 02 14:33:39 crc kubenswrapper[4814]: I1202 14:33:39.291633 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 02 14:33:39 crc 
kubenswrapper[4814]: I1202 14:33:39.377108 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 02 14:33:39 crc kubenswrapper[4814]: I1202 14:33:39.456933 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 02 14:33:40 crc kubenswrapper[4814]: I1202 14:33:40.002287 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 14:33:40 crc kubenswrapper[4814]: I1202 14:33:40.279972 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 02 14:33:40 crc kubenswrapper[4814]: I1202 14:33:40.290661 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 02 14:33:40 crc kubenswrapper[4814]: I1202 14:33:40.290690 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 02 14:33:40 crc kubenswrapper[4814]: I1202 14:33:40.324679 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 02 14:33:40 crc kubenswrapper[4814]: I1202 14:33:40.332571 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 02 14:33:40 crc kubenswrapper[4814]: I1202 14:33:40.389426 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 02 14:33:40 crc kubenswrapper[4814]: I1202 14:33:40.477842 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 02 14:33:40 crc kubenswrapper[4814]: I1202 14:33:40.481612 4814 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 02 14:33:40 crc kubenswrapper[4814]: I1202 14:33:40.485937 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 02 14:33:40 crc kubenswrapper[4814]: I1202 14:33:40.665396 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 02 14:33:40 crc kubenswrapper[4814]: I1202 14:33:40.724118 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 02 14:33:40 crc kubenswrapper[4814]: I1202 14:33:40.898259 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 02 14:33:41 crc kubenswrapper[4814]: I1202 14:33:41.112213 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 02 14:33:41 crc kubenswrapper[4814]: I1202 14:33:41.652406 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 02 14:33:41 crc kubenswrapper[4814]: I1202 14:33:41.695930 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 02 14:33:42 crc kubenswrapper[4814]: I1202 14:33:42.023500 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 02 14:33:42 crc kubenswrapper[4814]: I1202 14:33:42.770579 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 02 14:33:42 crc kubenswrapper[4814]: I1202 14:33:42.770667 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 14:33:42 crc kubenswrapper[4814]: I1202 14:33:42.781122 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 02 14:33:42 crc kubenswrapper[4814]: I1202 14:33:42.967380 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 14:33:42 crc kubenswrapper[4814]: I1202 14:33:42.967622 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:33:42 crc kubenswrapper[4814]: I1202 14:33:42.970163 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 14:33:42 crc kubenswrapper[4814]: I1202 14:33:42.970397 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 14:33:42 crc kubenswrapper[4814]: I1202 14:33:42.970321 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: 
"f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:33:42 crc kubenswrapper[4814]: I1202 14:33:42.970484 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:33:42 crc kubenswrapper[4814]: I1202 14:33:42.970571 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 14:33:42 crc kubenswrapper[4814]: I1202 14:33:42.970742 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 14:33:42 crc kubenswrapper[4814]: I1202 14:33:42.970751 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:33:42 crc kubenswrapper[4814]: I1202 14:33:42.972453 4814 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 02 14:33:42 crc kubenswrapper[4814]: I1202 14:33:42.972506 4814 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 02 14:33:42 crc kubenswrapper[4814]: I1202 14:33:42.972532 4814 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 02 14:33:42 crc kubenswrapper[4814]: I1202 14:33:42.972557 4814 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 02 14:33:42 crc kubenswrapper[4814]: I1202 14:33:42.982068 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:33:43 crc kubenswrapper[4814]: I1202 14:33:43.073852 4814 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 02 14:33:43 crc kubenswrapper[4814]: I1202 14:33:43.189291 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 02 14:33:43 crc kubenswrapper[4814]: I1202 14:33:43.392726 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 02 14:33:43 crc kubenswrapper[4814]: I1202 14:33:43.392792 4814 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="254033a625592ab3fe47d14d99b95630e1e619f9a1283a473549336669213cc6" exitCode=137 Dec 02 14:33:43 crc kubenswrapper[4814]: I1202 14:33:43.392859 4814 scope.go:117] "RemoveContainer" containerID="254033a625592ab3fe47d14d99b95630e1e619f9a1283a473549336669213cc6" Dec 02 14:33:43 crc kubenswrapper[4814]: I1202 14:33:43.392880 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 14:33:43 crc kubenswrapper[4814]: I1202 14:33:43.415907 4814 scope.go:117] "RemoveContainer" containerID="254033a625592ab3fe47d14d99b95630e1e619f9a1283a473549336669213cc6" Dec 02 14:33:43 crc kubenswrapper[4814]: E1202 14:33:43.417369 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"254033a625592ab3fe47d14d99b95630e1e619f9a1283a473549336669213cc6\": container with ID starting with 254033a625592ab3fe47d14d99b95630e1e619f9a1283a473549336669213cc6 not found: ID does not exist" containerID="254033a625592ab3fe47d14d99b95630e1e619f9a1283a473549336669213cc6" Dec 02 14:33:43 crc kubenswrapper[4814]: I1202 14:33:43.417425 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"254033a625592ab3fe47d14d99b95630e1e619f9a1283a473549336669213cc6"} err="failed to get container status \"254033a625592ab3fe47d14d99b95630e1e619f9a1283a473549336669213cc6\": rpc error: code = NotFound desc = could not find container \"254033a625592ab3fe47d14d99b95630e1e619f9a1283a473549336669213cc6\": container with ID starting with 254033a625592ab3fe47d14d99b95630e1e619f9a1283a473549336669213cc6 not found: ID does not exist" Dec 02 14:33:44 crc kubenswrapper[4814]: I1202 14:33:44.158180 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 02 14:33:44 crc kubenswrapper[4814]: I1202 14:33:44.158713 4814 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 02 14:33:44 crc kubenswrapper[4814]: I1202 14:33:44.177358 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 14:33:44 crc kubenswrapper[4814]: I1202 
14:33:44.177414 4814 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6d5b1135-8e41-4604-9abb-5e7cd7bcd387" Dec 02 14:33:44 crc kubenswrapper[4814]: I1202 14:33:44.184020 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 14:33:44 crc kubenswrapper[4814]: I1202 14:33:44.184095 4814 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6d5b1135-8e41-4604-9abb-5e7cd7bcd387" Dec 02 14:33:58 crc kubenswrapper[4814]: I1202 14:33:58.019471 4814 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Dec 02 14:34:03 crc kubenswrapper[4814]: I1202 14:34:03.557665 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 02 14:34:03 crc kubenswrapper[4814]: I1202 14:34:03.561775 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 02 14:34:03 crc kubenswrapper[4814]: I1202 14:34:03.561895 4814 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="23e0e91e8719bccce1cbda3ddefea944a9c4cf71d13e94887955a16caddba6d2" exitCode=137 Dec 02 14:34:03 crc kubenswrapper[4814]: I1202 14:34:03.561963 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"23e0e91e8719bccce1cbda3ddefea944a9c4cf71d13e94887955a16caddba6d2"} Dec 02 14:34:03 crc kubenswrapper[4814]: I1202 14:34:03.562044 4814 
scope.go:117] "RemoveContainer" containerID="4c927f0f67cc63c2ba43ab066826066c17f9d8019cfadc8511407fa97609acd4" Dec 02 14:34:04 crc kubenswrapper[4814]: I1202 14:34:04.576597 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 02 14:34:04 crc kubenswrapper[4814]: I1202 14:34:04.578589 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"af8cb0477faef5a81c1fa711ed3f6d089ddbc94fd85f3c38340748ff8317ff84"} Dec 02 14:34:06 crc kubenswrapper[4814]: I1202 14:34:06.490703 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 14:34:13 crc kubenswrapper[4814]: I1202 14:34:13.353601 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 14:34:13 crc kubenswrapper[4814]: I1202 14:34:13.361117 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 14:34:16 crc kubenswrapper[4814]: I1202 14:34:16.496408 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 14:34:38 crc kubenswrapper[4814]: I1202 14:34:38.530912 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6"] Dec 02 14:34:38 crc kubenswrapper[4814]: I1202 14:34:38.531924 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" podUID="588890a2-6538-4f1e-b102-24efd0c91bb8" 
containerName="route-controller-manager" containerID="cri-o://cc119d04bc603e4e9b8d785450039b2c01d28b5d13aea476145ce8aa07090bc2" gracePeriod=30 Dec 02 14:34:38 crc kubenswrapper[4814]: I1202 14:34:38.543594 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c2xpx"] Dec 02 14:34:38 crc kubenswrapper[4814]: I1202 14:34:38.544111 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" podUID="be857bca-e5ac-4d2f-8f62-8ab639f08d51" containerName="controller-manager" containerID="cri-o://d2404936c8cd90ac40fab08961892891847826a7252330751a9def5df79572c9" gracePeriod=30 Dec 02 14:34:38 crc kubenswrapper[4814]: I1202 14:34:38.893373 4814 generic.go:334] "Generic (PLEG): container finished" podID="588890a2-6538-4f1e-b102-24efd0c91bb8" containerID="cc119d04bc603e4e9b8d785450039b2c01d28b5d13aea476145ce8aa07090bc2" exitCode=0 Dec 02 14:34:38 crc kubenswrapper[4814]: I1202 14:34:38.893466 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" event={"ID":"588890a2-6538-4f1e-b102-24efd0c91bb8","Type":"ContainerDied","Data":"cc119d04bc603e4e9b8d785450039b2c01d28b5d13aea476145ce8aa07090bc2"} Dec 02 14:34:38 crc kubenswrapper[4814]: I1202 14:34:38.895023 4814 generic.go:334] "Generic (PLEG): container finished" podID="be857bca-e5ac-4d2f-8f62-8ab639f08d51" containerID="d2404936c8cd90ac40fab08961892891847826a7252330751a9def5df79572c9" exitCode=0 Dec 02 14:34:38 crc kubenswrapper[4814]: I1202 14:34:38.895096 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" event={"ID":"be857bca-e5ac-4d2f-8f62-8ab639f08d51","Type":"ContainerDied","Data":"d2404936c8cd90ac40fab08961892891847826a7252330751a9def5df79572c9"} Dec 02 14:34:38 crc kubenswrapper[4814]: I1202 14:34:38.973838 4814 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:34:38 crc kubenswrapper[4814]: I1202 14:34:38.978096 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.159049 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be857bca-e5ac-4d2f-8f62-8ab639f08d51-serving-cert\") pod \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\" (UID: \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\") " Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.159130 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/588890a2-6538-4f1e-b102-24efd0c91bb8-client-ca\") pod \"588890a2-6538-4f1e-b102-24efd0c91bb8\" (UID: \"588890a2-6538-4f1e-b102-24efd0c91bb8\") " Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.159189 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be857bca-e5ac-4d2f-8f62-8ab639f08d51-client-ca\") pod \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\" (UID: \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\") " Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.159225 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be857bca-e5ac-4d2f-8f62-8ab639f08d51-config\") pod \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\" (UID: \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\") " Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.159301 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/be857bca-e5ac-4d2f-8f62-8ab639f08d51-proxy-ca-bundles\") pod \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\" (UID: \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\") " Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.159330 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5jzq\" (UniqueName: \"kubernetes.io/projected/588890a2-6538-4f1e-b102-24efd0c91bb8-kube-api-access-j5jzq\") pod \"588890a2-6538-4f1e-b102-24efd0c91bb8\" (UID: \"588890a2-6538-4f1e-b102-24efd0c91bb8\") " Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.159366 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9xls\" (UniqueName: \"kubernetes.io/projected/be857bca-e5ac-4d2f-8f62-8ab639f08d51-kube-api-access-n9xls\") pod \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\" (UID: \"be857bca-e5ac-4d2f-8f62-8ab639f08d51\") " Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.159410 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/588890a2-6538-4f1e-b102-24efd0c91bb8-serving-cert\") pod \"588890a2-6538-4f1e-b102-24efd0c91bb8\" (UID: \"588890a2-6538-4f1e-b102-24efd0c91bb8\") " Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.159454 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/588890a2-6538-4f1e-b102-24efd0c91bb8-config\") pod \"588890a2-6538-4f1e-b102-24efd0c91bb8\" (UID: \"588890a2-6538-4f1e-b102-24efd0c91bb8\") " Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.160203 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/588890a2-6538-4f1e-b102-24efd0c91bb8-client-ca" (OuterVolumeSpecName: "client-ca") pod "588890a2-6538-4f1e-b102-24efd0c91bb8" (UID: "588890a2-6538-4f1e-b102-24efd0c91bb8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.160261 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/588890a2-6538-4f1e-b102-24efd0c91bb8-config" (OuterVolumeSpecName: "config") pod "588890a2-6538-4f1e-b102-24efd0c91bb8" (UID: "588890a2-6538-4f1e-b102-24efd0c91bb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.160213 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be857bca-e5ac-4d2f-8f62-8ab639f08d51-client-ca" (OuterVolumeSpecName: "client-ca") pod "be857bca-e5ac-4d2f-8f62-8ab639f08d51" (UID: "be857bca-e5ac-4d2f-8f62-8ab639f08d51"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.160388 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be857bca-e5ac-4d2f-8f62-8ab639f08d51-config" (OuterVolumeSpecName: "config") pod "be857bca-e5ac-4d2f-8f62-8ab639f08d51" (UID: "be857bca-e5ac-4d2f-8f62-8ab639f08d51"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.161023 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be857bca-e5ac-4d2f-8f62-8ab639f08d51-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "be857bca-e5ac-4d2f-8f62-8ab639f08d51" (UID: "be857bca-e5ac-4d2f-8f62-8ab639f08d51"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.166558 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/588890a2-6538-4f1e-b102-24efd0c91bb8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "588890a2-6538-4f1e-b102-24efd0c91bb8" (UID: "588890a2-6538-4f1e-b102-24efd0c91bb8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.166584 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be857bca-e5ac-4d2f-8f62-8ab639f08d51-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "be857bca-e5ac-4d2f-8f62-8ab639f08d51" (UID: "be857bca-e5ac-4d2f-8f62-8ab639f08d51"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.166592 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be857bca-e5ac-4d2f-8f62-8ab639f08d51-kube-api-access-n9xls" (OuterVolumeSpecName: "kube-api-access-n9xls") pod "be857bca-e5ac-4d2f-8f62-8ab639f08d51" (UID: "be857bca-e5ac-4d2f-8f62-8ab639f08d51"). InnerVolumeSpecName "kube-api-access-n9xls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.166736 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/588890a2-6538-4f1e-b102-24efd0c91bb8-kube-api-access-j5jzq" (OuterVolumeSpecName: "kube-api-access-j5jzq") pod "588890a2-6538-4f1e-b102-24efd0c91bb8" (UID: "588890a2-6538-4f1e-b102-24efd0c91bb8"). InnerVolumeSpecName "kube-api-access-j5jzq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.261271 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9xls\" (UniqueName: \"kubernetes.io/projected/be857bca-e5ac-4d2f-8f62-8ab639f08d51-kube-api-access-n9xls\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.261325 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/588890a2-6538-4f1e-b102-24efd0c91bb8-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.261342 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/588890a2-6538-4f1e-b102-24efd0c91bb8-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.261357 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be857bca-e5ac-4d2f-8f62-8ab639f08d51-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.261368 4814 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/588890a2-6538-4f1e-b102-24efd0c91bb8-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.261380 4814 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be857bca-e5ac-4d2f-8f62-8ab639f08d51-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.261394 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be857bca-e5ac-4d2f-8f62-8ab639f08d51-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.261444 4814 reconciler_common.go:293] "Volume detached for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be857bca-e5ac-4d2f-8f62-8ab639f08d51-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.261458 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5jzq\" (UniqueName: \"kubernetes.io/projected/588890a2-6538-4f1e-b102-24efd0c91bb8-kube-api-access-j5jzq\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.674156 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz"] Dec 02 14:34:39 crc kubenswrapper[4814]: E1202 14:34:39.674709 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.674742 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 02 14:34:39 crc kubenswrapper[4814]: E1202 14:34:39.674788 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" containerName="installer" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.674810 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" containerName="installer" Dec 02 14:34:39 crc kubenswrapper[4814]: E1202 14:34:39.674840 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="588890a2-6538-4f1e-b102-24efd0c91bb8" containerName="route-controller-manager" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.674853 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="588890a2-6538-4f1e-b102-24efd0c91bb8" containerName="route-controller-manager" Dec 02 14:34:39 crc kubenswrapper[4814]: E1202 14:34:39.674872 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be857bca-e5ac-4d2f-8f62-8ab639f08d51" 
containerName="controller-manager" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.674886 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="be857bca-e5ac-4d2f-8f62-8ab639f08d51" containerName="controller-manager" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.675059 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.675085 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="addefb6b-3fbe-4472-81cf-2e8ade2d4bfd" containerName="installer" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.675107 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="be857bca-e5ac-4d2f-8f62-8ab639f08d51" containerName="controller-manager" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.675126 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="588890a2-6538-4f1e-b102-24efd0c91bb8" containerName="route-controller-manager" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.675823 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.684567 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d7dfcd97-4mgqh"] Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.686405 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.693680 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz"] Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.702888 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d7dfcd97-4mgqh"] Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.868965 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38a28aac-5c8b-4888-84b6-d6d2078e04fd-serving-cert\") pod \"controller-manager-d7dfcd97-4mgqh\" (UID: \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\") " pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.869266 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38a28aac-5c8b-4888-84b6-d6d2078e04fd-proxy-ca-bundles\") pod \"controller-manager-d7dfcd97-4mgqh\" (UID: \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\") " pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.869394 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e569b8c-86da-4d9a-9d0a-18a11ec78604-client-ca\") pod \"route-controller-manager-75b7d5f955-5xrsz\" (UID: \"3e569b8c-86da-4d9a-9d0a-18a11ec78604\") " pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.869447 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3e569b8c-86da-4d9a-9d0a-18a11ec78604-serving-cert\") pod \"route-controller-manager-75b7d5f955-5xrsz\" (UID: \"3e569b8c-86da-4d9a-9d0a-18a11ec78604\") " pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.869503 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a28aac-5c8b-4888-84b6-d6d2078e04fd-config\") pod \"controller-manager-d7dfcd97-4mgqh\" (UID: \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\") " pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.869605 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38a28aac-5c8b-4888-84b6-d6d2078e04fd-client-ca\") pod \"controller-manager-d7dfcd97-4mgqh\" (UID: \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\") " pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.869737 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhp57\" (UniqueName: \"kubernetes.io/projected/3e569b8c-86da-4d9a-9d0a-18a11ec78604-kube-api-access-nhp57\") pod \"route-controller-manager-75b7d5f955-5xrsz\" (UID: \"3e569b8c-86da-4d9a-9d0a-18a11ec78604\") " pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.869827 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e569b8c-86da-4d9a-9d0a-18a11ec78604-config\") pod \"route-controller-manager-75b7d5f955-5xrsz\" (UID: \"3e569b8c-86da-4d9a-9d0a-18a11ec78604\") " 
pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.869861 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srctp\" (UniqueName: \"kubernetes.io/projected/38a28aac-5c8b-4888-84b6-d6d2078e04fd-kube-api-access-srctp\") pod \"controller-manager-d7dfcd97-4mgqh\" (UID: \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\") " pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.903651 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" event={"ID":"588890a2-6538-4f1e-b102-24efd0c91bb8","Type":"ContainerDied","Data":"2484fd4070f10d5db32ab5524322a9eec8619413ad2c07638148f2bcb986b3e5"} Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.903687 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.903762 4814 scope.go:117] "RemoveContainer" containerID="cc119d04bc603e4e9b8d785450039b2c01d28b5d13aea476145ce8aa07090bc2" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.905606 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" event={"ID":"be857bca-e5ac-4d2f-8f62-8ab639f08d51","Type":"ContainerDied","Data":"aaff7296d60e3e6f300437a7f5a29d32e9378e7c643840a1e05a286aa8dcc12f"} Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.905647 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c2xpx" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.922197 4814 scope.go:117] "RemoveContainer" containerID="d2404936c8cd90ac40fab08961892891847826a7252330751a9def5df79572c9" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.951469 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c2xpx"] Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.955341 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c2xpx"] Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.964876 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6"] Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.969371 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t82r6"] Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.988574 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e569b8c-86da-4d9a-9d0a-18a11ec78604-client-ca\") pod \"route-controller-manager-75b7d5f955-5xrsz\" (UID: \"3e569b8c-86da-4d9a-9d0a-18a11ec78604\") " pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.988616 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e569b8c-86da-4d9a-9d0a-18a11ec78604-serving-cert\") pod \"route-controller-manager-75b7d5f955-5xrsz\" (UID: \"3e569b8c-86da-4d9a-9d0a-18a11ec78604\") " pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.988648 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a28aac-5c8b-4888-84b6-d6d2078e04fd-config\") pod \"controller-manager-d7dfcd97-4mgqh\" (UID: \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\") " pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.988681 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38a28aac-5c8b-4888-84b6-d6d2078e04fd-client-ca\") pod \"controller-manager-d7dfcd97-4mgqh\" (UID: \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\") " pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.988718 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhp57\" (UniqueName: \"kubernetes.io/projected/3e569b8c-86da-4d9a-9d0a-18a11ec78604-kube-api-access-nhp57\") pod \"route-controller-manager-75b7d5f955-5xrsz\" (UID: \"3e569b8c-86da-4d9a-9d0a-18a11ec78604\") " pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.988757 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e569b8c-86da-4d9a-9d0a-18a11ec78604-config\") pod \"route-controller-manager-75b7d5f955-5xrsz\" (UID: \"3e569b8c-86da-4d9a-9d0a-18a11ec78604\") " pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.988785 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srctp\" (UniqueName: \"kubernetes.io/projected/38a28aac-5c8b-4888-84b6-d6d2078e04fd-kube-api-access-srctp\") pod \"controller-manager-d7dfcd97-4mgqh\" (UID: \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\") " 
pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.988814 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38a28aac-5c8b-4888-84b6-d6d2078e04fd-serving-cert\") pod \"controller-manager-d7dfcd97-4mgqh\" (UID: \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\") " pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.988841 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38a28aac-5c8b-4888-84b6-d6d2078e04fd-proxy-ca-bundles\") pod \"controller-manager-d7dfcd97-4mgqh\" (UID: \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\") " pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.990641 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38a28aac-5c8b-4888-84b6-d6d2078e04fd-proxy-ca-bundles\") pod \"controller-manager-d7dfcd97-4mgqh\" (UID: \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\") " pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.990942 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e569b8c-86da-4d9a-9d0a-18a11ec78604-client-ca\") pod \"route-controller-manager-75b7d5f955-5xrsz\" (UID: \"3e569b8c-86da-4d9a-9d0a-18a11ec78604\") " pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.991129 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a28aac-5c8b-4888-84b6-d6d2078e04fd-config\") pod 
\"controller-manager-d7dfcd97-4mgqh\" (UID: \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\") " pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.991338 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38a28aac-5c8b-4888-84b6-d6d2078e04fd-client-ca\") pod \"controller-manager-d7dfcd97-4mgqh\" (UID: \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\") " pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.991903 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e569b8c-86da-4d9a-9d0a-18a11ec78604-config\") pod \"route-controller-manager-75b7d5f955-5xrsz\" (UID: \"3e569b8c-86da-4d9a-9d0a-18a11ec78604\") " pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.994823 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e569b8c-86da-4d9a-9d0a-18a11ec78604-serving-cert\") pod \"route-controller-manager-75b7d5f955-5xrsz\" (UID: \"3e569b8c-86da-4d9a-9d0a-18a11ec78604\") " pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" Dec 02 14:34:39 crc kubenswrapper[4814]: I1202 14:34:39.996905 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38a28aac-5c8b-4888-84b6-d6d2078e04fd-serving-cert\") pod \"controller-manager-d7dfcd97-4mgqh\" (UID: \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\") " pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:40 crc kubenswrapper[4814]: I1202 14:34:40.007652 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srctp\" (UniqueName: 
\"kubernetes.io/projected/38a28aac-5c8b-4888-84b6-d6d2078e04fd-kube-api-access-srctp\") pod \"controller-manager-d7dfcd97-4mgqh\" (UID: \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\") " pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:40 crc kubenswrapper[4814]: I1202 14:34:40.010049 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhp57\" (UniqueName: \"kubernetes.io/projected/3e569b8c-86da-4d9a-9d0a-18a11ec78604-kube-api-access-nhp57\") pod \"route-controller-manager-75b7d5f955-5xrsz\" (UID: \"3e569b8c-86da-4d9a-9d0a-18a11ec78604\") " pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" Dec 02 14:34:40 crc kubenswrapper[4814]: I1202 14:34:40.027397 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" Dec 02 14:34:40 crc kubenswrapper[4814]: I1202 14:34:40.042366 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:40 crc kubenswrapper[4814]: I1202 14:34:40.153648 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="588890a2-6538-4f1e-b102-24efd0c91bb8" path="/var/lib/kubelet/pods/588890a2-6538-4f1e-b102-24efd0c91bb8/volumes" Dec 02 14:34:40 crc kubenswrapper[4814]: I1202 14:34:40.154684 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be857bca-e5ac-4d2f-8f62-8ab639f08d51" path="/var/lib/kubelet/pods/be857bca-e5ac-4d2f-8f62-8ab639f08d51/volumes" Dec 02 14:34:40 crc kubenswrapper[4814]: I1202 14:34:40.241660 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz"] Dec 02 14:34:40 crc kubenswrapper[4814]: I1202 14:34:40.292549 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d7dfcd97-4mgqh"] Dec 02 14:34:40 crc kubenswrapper[4814]: W1202 14:34:40.311216 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38a28aac_5c8b_4888_84b6_d6d2078e04fd.slice/crio-529ecc59b03222fd1a5775a6402e15012bfbad580ac566af776ed2ab01a8995a WatchSource:0}: Error finding container 529ecc59b03222fd1a5775a6402e15012bfbad580ac566af776ed2ab01a8995a: Status 404 returned error can't find the container with id 529ecc59b03222fd1a5775a6402e15012bfbad580ac566af776ed2ab01a8995a Dec 02 14:34:40 crc kubenswrapper[4814]: I1202 14:34:40.912019 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" event={"ID":"38a28aac-5c8b-4888-84b6-d6d2078e04fd","Type":"ContainerStarted","Data":"44ca4ead8c701529acdfb66205bd502b5988c2f6f3235dc39c7a661ca3a93cdb"} Dec 02 14:34:40 crc kubenswrapper[4814]: I1202 14:34:40.912264 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" event={"ID":"38a28aac-5c8b-4888-84b6-d6d2078e04fd","Type":"ContainerStarted","Data":"529ecc59b03222fd1a5775a6402e15012bfbad580ac566af776ed2ab01a8995a"} Dec 02 14:34:40 crc kubenswrapper[4814]: I1202 14:34:40.912286 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:40 crc kubenswrapper[4814]: I1202 14:34:40.915854 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" event={"ID":"3e569b8c-86da-4d9a-9d0a-18a11ec78604","Type":"ContainerStarted","Data":"2fa77b2b971e8eee06a3899612dcdde38e286a2979cc9a804278ebb179bde0a8"} Dec 02 14:34:40 crc kubenswrapper[4814]: I1202 14:34:40.915905 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" event={"ID":"3e569b8c-86da-4d9a-9d0a-18a11ec78604","Type":"ContainerStarted","Data":"a798ffef7a6edbe2f58188a6e5b0e4d2537f43a483ba12497b1a2367cc8987e2"} Dec 02 14:34:40 crc kubenswrapper[4814]: I1202 14:34:40.916776 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" Dec 02 14:34:40 crc kubenswrapper[4814]: I1202 14:34:40.919432 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:40 crc kubenswrapper[4814]: I1202 14:34:40.957731 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" podStartSLOduration=2.957705081 podStartE2EDuration="2.957705081s" podCreationTimestamp="2025-12-02 14:34:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-02 14:34:40.937283476 +0000 UTC m=+343.061645680" watchObservedRunningTime="2025-12-02 14:34:40.957705081 +0000 UTC m=+343.082067285" Dec 02 14:34:40 crc kubenswrapper[4814]: I1202 14:34:40.984571 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" podStartSLOduration=2.98454137 podStartE2EDuration="2.98454137s" podCreationTimestamp="2025-12-02 14:34:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:34:40.960713818 +0000 UTC m=+343.085076022" watchObservedRunningTime="2025-12-02 14:34:40.98454137 +0000 UTC m=+343.108903584" Dec 02 14:34:41 crc kubenswrapper[4814]: I1202 14:34:41.175181 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" Dec 02 14:34:45 crc kubenswrapper[4814]: I1202 14:34:45.834179 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d7dfcd97-4mgqh"] Dec 02 14:34:45 crc kubenswrapper[4814]: I1202 14:34:45.835446 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" podUID="38a28aac-5c8b-4888-84b6-d6d2078e04fd" containerName="controller-manager" containerID="cri-o://44ca4ead8c701529acdfb66205bd502b5988c2f6f3235dc39c7a661ca3a93cdb" gracePeriod=30 Dec 02 14:34:45 crc kubenswrapper[4814]: I1202 14:34:45.855800 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz"] Dec 02 14:34:45 crc kubenswrapper[4814]: I1202 14:34:45.856043 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" 
podUID="3e569b8c-86da-4d9a-9d0a-18a11ec78604" containerName="route-controller-manager" containerID="cri-o://2fa77b2b971e8eee06a3899612dcdde38e286a2979cc9a804278ebb179bde0a8" gracePeriod=30 Dec 02 14:34:46 crc kubenswrapper[4814]: I1202 14:34:46.954809 4814 generic.go:334] "Generic (PLEG): container finished" podID="3e569b8c-86da-4d9a-9d0a-18a11ec78604" containerID="2fa77b2b971e8eee06a3899612dcdde38e286a2979cc9a804278ebb179bde0a8" exitCode=0 Dec 02 14:34:46 crc kubenswrapper[4814]: I1202 14:34:46.954895 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" event={"ID":"3e569b8c-86da-4d9a-9d0a-18a11ec78604","Type":"ContainerDied","Data":"2fa77b2b971e8eee06a3899612dcdde38e286a2979cc9a804278ebb179bde0a8"} Dec 02 14:34:46 crc kubenswrapper[4814]: I1202 14:34:46.959739 4814 generic.go:334] "Generic (PLEG): container finished" podID="38a28aac-5c8b-4888-84b6-d6d2078e04fd" containerID="44ca4ead8c701529acdfb66205bd502b5988c2f6f3235dc39c7a661ca3a93cdb" exitCode=0 Dec 02 14:34:46 crc kubenswrapper[4814]: I1202 14:34:46.959781 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" event={"ID":"38a28aac-5c8b-4888-84b6-d6d2078e04fd","Type":"ContainerDied","Data":"44ca4ead8c701529acdfb66205bd502b5988c2f6f3235dc39c7a661ca3a93cdb"} Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.143822 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.175663 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp"] Dec 02 14:34:47 crc kubenswrapper[4814]: E1202 14:34:47.175883 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e569b8c-86da-4d9a-9d0a-18a11ec78604" containerName="route-controller-manager" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.175895 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e569b8c-86da-4d9a-9d0a-18a11ec78604" containerName="route-controller-manager" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.176007 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e569b8c-86da-4d9a-9d0a-18a11ec78604" containerName="route-controller-manager" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.176397 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.188868 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp"] Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.201572 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e569b8c-86da-4d9a-9d0a-18a11ec78604-config\") pod \"3e569b8c-86da-4d9a-9d0a-18a11ec78604\" (UID: \"3e569b8c-86da-4d9a-9d0a-18a11ec78604\") " Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.201657 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e569b8c-86da-4d9a-9d0a-18a11ec78604-client-ca\") pod \"3e569b8c-86da-4d9a-9d0a-18a11ec78604\" (UID: \"3e569b8c-86da-4d9a-9d0a-18a11ec78604\") " Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.201787 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhp57\" (UniqueName: \"kubernetes.io/projected/3e569b8c-86da-4d9a-9d0a-18a11ec78604-kube-api-access-nhp57\") pod \"3e569b8c-86da-4d9a-9d0a-18a11ec78604\" (UID: \"3e569b8c-86da-4d9a-9d0a-18a11ec78604\") " Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.201805 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e569b8c-86da-4d9a-9d0a-18a11ec78604-serving-cert\") pod \"3e569b8c-86da-4d9a-9d0a-18a11ec78604\" (UID: \"3e569b8c-86da-4d9a-9d0a-18a11ec78604\") " Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.204055 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e569b8c-86da-4d9a-9d0a-18a11ec78604-config" (OuterVolumeSpecName: "config") pod "3e569b8c-86da-4d9a-9d0a-18a11ec78604" (UID: 
"3e569b8c-86da-4d9a-9d0a-18a11ec78604"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.205144 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e569b8c-86da-4d9a-9d0a-18a11ec78604-client-ca" (OuterVolumeSpecName: "client-ca") pod "3e569b8c-86da-4d9a-9d0a-18a11ec78604" (UID: "3e569b8c-86da-4d9a-9d0a-18a11ec78604"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.207830 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e569b8c-86da-4d9a-9d0a-18a11ec78604-kube-api-access-nhp57" (OuterVolumeSpecName: "kube-api-access-nhp57") pod "3e569b8c-86da-4d9a-9d0a-18a11ec78604" (UID: "3e569b8c-86da-4d9a-9d0a-18a11ec78604"). InnerVolumeSpecName "kube-api-access-nhp57". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.207933 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e569b8c-86da-4d9a-9d0a-18a11ec78604-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3e569b8c-86da-4d9a-9d0a-18a11ec78604" (UID: "3e569b8c-86da-4d9a-9d0a-18a11ec78604"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.256194 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.303348 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srctp\" (UniqueName: \"kubernetes.io/projected/38a28aac-5c8b-4888-84b6-d6d2078e04fd-kube-api-access-srctp\") pod \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\" (UID: \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\") " Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.303500 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38a28aac-5c8b-4888-84b6-d6d2078e04fd-proxy-ca-bundles\") pod \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\" (UID: \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\") " Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.303612 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a28aac-5c8b-4888-84b6-d6d2078e04fd-config\") pod \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\" (UID: \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\") " Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.303666 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38a28aac-5c8b-4888-84b6-d6d2078e04fd-client-ca\") pod \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\" (UID: \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\") " Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.303733 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38a28aac-5c8b-4888-84b6-d6d2078e04fd-serving-cert\") pod \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\" (UID: \"38a28aac-5c8b-4888-84b6-d6d2078e04fd\") " Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.304093 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e01ad65-8efd-4079-94b6-596a3fb6bc99-config\") pod \"route-controller-manager-7d9d44f64b-s8lkp\" (UID: \"5e01ad65-8efd-4079-94b6-596a3fb6bc99\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.304158 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e01ad65-8efd-4079-94b6-596a3fb6bc99-serving-cert\") pod \"route-controller-manager-7d9d44f64b-s8lkp\" (UID: \"5e01ad65-8efd-4079-94b6-596a3fb6bc99\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.304244 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e01ad65-8efd-4079-94b6-596a3fb6bc99-client-ca\") pod \"route-controller-manager-7d9d44f64b-s8lkp\" (UID: \"5e01ad65-8efd-4079-94b6-596a3fb6bc99\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.304538 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38a28aac-5c8b-4888-84b6-d6d2078e04fd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "38a28aac-5c8b-4888-84b6-d6d2078e04fd" (UID: "38a28aac-5c8b-4888-84b6-d6d2078e04fd"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.304566 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96dcf\" (UniqueName: \"kubernetes.io/projected/5e01ad65-8efd-4079-94b6-596a3fb6bc99-kube-api-access-96dcf\") pod \"route-controller-manager-7d9d44f64b-s8lkp\" (UID: \"5e01ad65-8efd-4079-94b6-596a3fb6bc99\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.304728 4814 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38a28aac-5c8b-4888-84b6-d6d2078e04fd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.304750 4814 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e569b8c-86da-4d9a-9d0a-18a11ec78604-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.304763 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhp57\" (UniqueName: \"kubernetes.io/projected/3e569b8c-86da-4d9a-9d0a-18a11ec78604-kube-api-access-nhp57\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.304774 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e569b8c-86da-4d9a-9d0a-18a11ec78604-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.304784 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e569b8c-86da-4d9a-9d0a-18a11ec78604-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.304899 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/38a28aac-5c8b-4888-84b6-d6d2078e04fd-client-ca" (OuterVolumeSpecName: "client-ca") pod "38a28aac-5c8b-4888-84b6-d6d2078e04fd" (UID: "38a28aac-5c8b-4888-84b6-d6d2078e04fd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.305981 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38a28aac-5c8b-4888-84b6-d6d2078e04fd-config" (OuterVolumeSpecName: "config") pod "38a28aac-5c8b-4888-84b6-d6d2078e04fd" (UID: "38a28aac-5c8b-4888-84b6-d6d2078e04fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.307241 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38a28aac-5c8b-4888-84b6-d6d2078e04fd-kube-api-access-srctp" (OuterVolumeSpecName: "kube-api-access-srctp") pod "38a28aac-5c8b-4888-84b6-d6d2078e04fd" (UID: "38a28aac-5c8b-4888-84b6-d6d2078e04fd"). InnerVolumeSpecName "kube-api-access-srctp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.307799 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a28aac-5c8b-4888-84b6-d6d2078e04fd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "38a28aac-5c8b-4888-84b6-d6d2078e04fd" (UID: "38a28aac-5c8b-4888-84b6-d6d2078e04fd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.405785 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96dcf\" (UniqueName: \"kubernetes.io/projected/5e01ad65-8efd-4079-94b6-596a3fb6bc99-kube-api-access-96dcf\") pod \"route-controller-manager-7d9d44f64b-s8lkp\" (UID: \"5e01ad65-8efd-4079-94b6-596a3fb6bc99\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.405916 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e01ad65-8efd-4079-94b6-596a3fb6bc99-config\") pod \"route-controller-manager-7d9d44f64b-s8lkp\" (UID: \"5e01ad65-8efd-4079-94b6-596a3fb6bc99\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.405941 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e01ad65-8efd-4079-94b6-596a3fb6bc99-serving-cert\") pod \"route-controller-manager-7d9d44f64b-s8lkp\" (UID: \"5e01ad65-8efd-4079-94b6-596a3fb6bc99\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.405973 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e01ad65-8efd-4079-94b6-596a3fb6bc99-client-ca\") pod \"route-controller-manager-7d9d44f64b-s8lkp\" (UID: \"5e01ad65-8efd-4079-94b6-596a3fb6bc99\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.406058 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srctp\" (UniqueName: 
\"kubernetes.io/projected/38a28aac-5c8b-4888-84b6-d6d2078e04fd-kube-api-access-srctp\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.406072 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a28aac-5c8b-4888-84b6-d6d2078e04fd-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.406215 4814 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38a28aac-5c8b-4888-84b6-d6d2078e04fd-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.406350 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38a28aac-5c8b-4888-84b6-d6d2078e04fd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.407402 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e01ad65-8efd-4079-94b6-596a3fb6bc99-client-ca\") pod \"route-controller-manager-7d9d44f64b-s8lkp\" (UID: \"5e01ad65-8efd-4079-94b6-596a3fb6bc99\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.407607 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e01ad65-8efd-4079-94b6-596a3fb6bc99-config\") pod \"route-controller-manager-7d9d44f64b-s8lkp\" (UID: \"5e01ad65-8efd-4079-94b6-596a3fb6bc99\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.410387 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e01ad65-8efd-4079-94b6-596a3fb6bc99-serving-cert\") pod 
\"route-controller-manager-7d9d44f64b-s8lkp\" (UID: \"5e01ad65-8efd-4079-94b6-596a3fb6bc99\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.422959 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96dcf\" (UniqueName: \"kubernetes.io/projected/5e01ad65-8efd-4079-94b6-596a3fb6bc99-kube-api-access-96dcf\") pod \"route-controller-manager-7d9d44f64b-s8lkp\" (UID: \"5e01ad65-8efd-4079-94b6-596a3fb6bc99\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.554073 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.820738 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp"] Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.970048 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" event={"ID":"38a28aac-5c8b-4888-84b6-d6d2078e04fd","Type":"ContainerDied","Data":"529ecc59b03222fd1a5775a6402e15012bfbad580ac566af776ed2ab01a8995a"} Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.970140 4814 scope.go:117] "RemoveContainer" containerID="44ca4ead8c701529acdfb66205bd502b5988c2f6f3235dc39c7a661ca3a93cdb" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.970083 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d7dfcd97-4mgqh" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.977059 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" event={"ID":"5e01ad65-8efd-4079-94b6-596a3fb6bc99","Type":"ContainerStarted","Data":"2fd5f30f5d323e31ec6fb919ec0b0e3c6aa02d8c8c6add8feb0130e9cf41d1da"} Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.977411 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.980689 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" event={"ID":"3e569b8c-86da-4d9a-9d0a-18a11ec78604","Type":"ContainerDied","Data":"a798ffef7a6edbe2f58188a6e5b0e4d2537f43a483ba12497b1a2367cc8987e2"} Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.980923 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.981753 4814 patch_prober.go:28] interesting pod/route-controller-manager-7d9d44f64b-s8lkp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.981800 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" podUID="5e01ad65-8efd-4079-94b6-596a3fb6bc99" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Dec 02 14:34:47 crc kubenswrapper[4814]: I1202 14:34:47.996907 4814 scope.go:117] "RemoveContainer" containerID="2fa77b2b971e8eee06a3899612dcdde38e286a2979cc9a804278ebb179bde0a8" Dec 02 14:34:48 crc kubenswrapper[4814]: I1202 14:34:48.029414 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" podStartSLOduration=3.029383389 podStartE2EDuration="3.029383389s" podCreationTimestamp="2025-12-02 14:34:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:34:48.011359416 +0000 UTC m=+350.135721660" watchObservedRunningTime="2025-12-02 14:34:48.029383389 +0000 UTC m=+350.153745633" Dec 02 14:34:48 crc kubenswrapper[4814]: I1202 14:34:48.058123 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz"] Dec 02 14:34:48 crc kubenswrapper[4814]: I1202 14:34:48.061843 4814 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-75b7d5f955-5xrsz"] Dec 02 14:34:48 crc kubenswrapper[4814]: I1202 14:34:48.076324 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d7dfcd97-4mgqh"] Dec 02 14:34:48 crc kubenswrapper[4814]: I1202 14:34:48.078998 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d7dfcd97-4mgqh"] Dec 02 14:34:48 crc kubenswrapper[4814]: I1202 14:34:48.174993 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38a28aac-5c8b-4888-84b6-d6d2078e04fd" path="/var/lib/kubelet/pods/38a28aac-5c8b-4888-84b6-d6d2078e04fd/volumes" Dec 02 14:34:48 crc kubenswrapper[4814]: I1202 14:34:48.176490 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e569b8c-86da-4d9a-9d0a-18a11ec78604" path="/var/lib/kubelet/pods/3e569b8c-86da-4d9a-9d0a-18a11ec78604/volumes" Dec 02 14:34:48 crc kubenswrapper[4814]: I1202 14:34:48.991609 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" event={"ID":"5e01ad65-8efd-4079-94b6-596a3fb6bc99","Type":"ContainerStarted","Data":"c8ff8d4f83ec8179c585373219b796b369a79b31d773a64c6e33d9955d929a70"} Dec 02 14:34:48 crc kubenswrapper[4814]: I1202 14:34:48.997758 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.680525 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f97b6cc9d-l9742"] Dec 02 14:34:49 crc kubenswrapper[4814]: E1202 14:34:49.681521 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a28aac-5c8b-4888-84b6-d6d2078e04fd" containerName="controller-manager" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 
14:34:49.681553 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a28aac-5c8b-4888-84b6-d6d2078e04fd" containerName="controller-manager" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.681790 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a28aac-5c8b-4888-84b6-d6d2078e04fd" containerName="controller-manager" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.682787 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f97b6cc9d-l9742" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.686421 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.686865 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.687682 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.688075 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.689543 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.689998 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.700816 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.709033 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-7f97b6cc9d-l9742"] Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.744301 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47ebb74b-e9ca-4141-a7f2-18579529f5d0-serving-cert\") pod \"controller-manager-7f97b6cc9d-l9742\" (UID: \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-l9742" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.744368 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47ebb74b-e9ca-4141-a7f2-18579529f5d0-proxy-ca-bundles\") pod \"controller-manager-7f97b6cc9d-l9742\" (UID: \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-l9742" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.744416 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scglk\" (UniqueName: \"kubernetes.io/projected/47ebb74b-e9ca-4141-a7f2-18579529f5d0-kube-api-access-scglk\") pod \"controller-manager-7f97b6cc9d-l9742\" (UID: \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-l9742" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.744449 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47ebb74b-e9ca-4141-a7f2-18579529f5d0-client-ca\") pod \"controller-manager-7f97b6cc9d-l9742\" (UID: \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-l9742" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.744571 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/47ebb74b-e9ca-4141-a7f2-18579529f5d0-config\") pod \"controller-manager-7f97b6cc9d-l9742\" (UID: \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-l9742" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.846474 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47ebb74b-e9ca-4141-a7f2-18579529f5d0-serving-cert\") pod \"controller-manager-7f97b6cc9d-l9742\" (UID: \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-l9742" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.846550 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47ebb74b-e9ca-4141-a7f2-18579529f5d0-proxy-ca-bundles\") pod \"controller-manager-7f97b6cc9d-l9742\" (UID: \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-l9742" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.846584 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scglk\" (UniqueName: \"kubernetes.io/projected/47ebb74b-e9ca-4141-a7f2-18579529f5d0-kube-api-access-scglk\") pod \"controller-manager-7f97b6cc9d-l9742\" (UID: \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-l9742" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.846617 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47ebb74b-e9ca-4141-a7f2-18579529f5d0-client-ca\") pod \"controller-manager-7f97b6cc9d-l9742\" (UID: \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-l9742" Dec 02 14:34:49 crc kubenswrapper[4814]: 
I1202 14:34:49.846659 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47ebb74b-e9ca-4141-a7f2-18579529f5d0-config\") pod \"controller-manager-7f97b6cc9d-l9742\" (UID: \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-l9742" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.847879 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47ebb74b-e9ca-4141-a7f2-18579529f5d0-client-ca\") pod \"controller-manager-7f97b6cc9d-l9742\" (UID: \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-l9742" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.848801 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47ebb74b-e9ca-4141-a7f2-18579529f5d0-config\") pod \"controller-manager-7f97b6cc9d-l9742\" (UID: \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-l9742" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.848960 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47ebb74b-e9ca-4141-a7f2-18579529f5d0-proxy-ca-bundles\") pod \"controller-manager-7f97b6cc9d-l9742\" (UID: \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-l9742" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.855056 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f97b6cc9d-l9742"] Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.857665 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/47ebb74b-e9ca-4141-a7f2-18579529f5d0-serving-cert\") pod \"controller-manager-7f97b6cc9d-l9742\" (UID: \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-l9742" Dec 02 14:34:49 crc kubenswrapper[4814]: E1202 14:34:49.860435 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-scglk serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-7f97b6cc9d-l9742" podUID="47ebb74b-e9ca-4141-a7f2-18579529f5d0" Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.867806 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp"] Dec 02 14:34:49 crc kubenswrapper[4814]: I1202 14:34:49.879013 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scglk\" (UniqueName: \"kubernetes.io/projected/47ebb74b-e9ca-4141-a7f2-18579529f5d0-kube-api-access-scglk\") pod \"controller-manager-7f97b6cc9d-l9742\" (UID: \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-l9742" Dec 02 14:34:50 crc kubenswrapper[4814]: I1202 14:34:50.001006 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f97b6cc9d-l9742" Dec 02 14:34:50 crc kubenswrapper[4814]: I1202 14:34:50.012808 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f97b6cc9d-l9742" Dec 02 14:34:50 crc kubenswrapper[4814]: I1202 14:34:50.049494 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47ebb74b-e9ca-4141-a7f2-18579529f5d0-config\") pod \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\" (UID: \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\") " Dec 02 14:34:50 crc kubenswrapper[4814]: I1202 14:34:50.049625 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47ebb74b-e9ca-4141-a7f2-18579529f5d0-serving-cert\") pod \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\" (UID: \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\") " Dec 02 14:34:50 crc kubenswrapper[4814]: I1202 14:34:50.049741 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47ebb74b-e9ca-4141-a7f2-18579529f5d0-proxy-ca-bundles\") pod \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\" (UID: \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\") " Dec 02 14:34:50 crc kubenswrapper[4814]: I1202 14:34:50.049866 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47ebb74b-e9ca-4141-a7f2-18579529f5d0-client-ca\") pod \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\" (UID: \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\") " Dec 02 14:34:50 crc kubenswrapper[4814]: I1202 14:34:50.049925 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scglk\" (UniqueName: \"kubernetes.io/projected/47ebb74b-e9ca-4141-a7f2-18579529f5d0-kube-api-access-scglk\") pod \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\" (UID: \"47ebb74b-e9ca-4141-a7f2-18579529f5d0\") " Dec 02 14:34:50 crc kubenswrapper[4814]: I1202 14:34:50.050143 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/47ebb74b-e9ca-4141-a7f2-18579529f5d0-config" (OuterVolumeSpecName: "config") pod "47ebb74b-e9ca-4141-a7f2-18579529f5d0" (UID: "47ebb74b-e9ca-4141-a7f2-18579529f5d0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:34:50 crc kubenswrapper[4814]: I1202 14:34:50.050278 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47ebb74b-e9ca-4141-a7f2-18579529f5d0-client-ca" (OuterVolumeSpecName: "client-ca") pod "47ebb74b-e9ca-4141-a7f2-18579529f5d0" (UID: "47ebb74b-e9ca-4141-a7f2-18579529f5d0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:34:50 crc kubenswrapper[4814]: I1202 14:34:50.050421 4814 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47ebb74b-e9ca-4141-a7f2-18579529f5d0-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:50 crc kubenswrapper[4814]: I1202 14:34:50.050440 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47ebb74b-e9ca-4141-a7f2-18579529f5d0-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:50 crc kubenswrapper[4814]: I1202 14:34:50.050887 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47ebb74b-e9ca-4141-a7f2-18579529f5d0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "47ebb74b-e9ca-4141-a7f2-18579529f5d0" (UID: "47ebb74b-e9ca-4141-a7f2-18579529f5d0"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:34:50 crc kubenswrapper[4814]: I1202 14:34:50.053876 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47ebb74b-e9ca-4141-a7f2-18579529f5d0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "47ebb74b-e9ca-4141-a7f2-18579529f5d0" (UID: "47ebb74b-e9ca-4141-a7f2-18579529f5d0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:34:50 crc kubenswrapper[4814]: I1202 14:34:50.055431 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47ebb74b-e9ca-4141-a7f2-18579529f5d0-kube-api-access-scglk" (OuterVolumeSpecName: "kube-api-access-scglk") pod "47ebb74b-e9ca-4141-a7f2-18579529f5d0" (UID: "47ebb74b-e9ca-4141-a7f2-18579529f5d0"). InnerVolumeSpecName "kube-api-access-scglk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:34:50 crc kubenswrapper[4814]: I1202 14:34:50.151552 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scglk\" (UniqueName: \"kubernetes.io/projected/47ebb74b-e9ca-4141-a7f2-18579529f5d0-kube-api-access-scglk\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:50 crc kubenswrapper[4814]: I1202 14:34:50.151605 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47ebb74b-e9ca-4141-a7f2-18579529f5d0-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:50 crc kubenswrapper[4814]: I1202 14:34:50.151623 4814 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47ebb74b-e9ca-4141-a7f2-18579529f5d0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:50 crc kubenswrapper[4814]: I1202 14:34:50.651093 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:34:50 crc kubenswrapper[4814]: I1202 14:34:50.651186 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.007384 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f97b6cc9d-l9742" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.008014 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" podUID="5e01ad65-8efd-4079-94b6-596a3fb6bc99" containerName="route-controller-manager" containerID="cri-o://c8ff8d4f83ec8179c585373219b796b369a79b31d773a64c6e33d9955d929a70" gracePeriod=30 Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.109832 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-dfbd889b8-qst2k"] Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.111043 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.114262 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.115105 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.115742 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.130423 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f97b6cc9d-l9742"] Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.138978 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f97b6cc9d-l9742"] Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.146573 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.146631 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.148611 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.151904 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.166708 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dfbd889b8-qst2k"] Dec 02 14:34:51 crc 
kubenswrapper[4814]: I1202 14:34:51.200807 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61efec99-c80c-4599-bf69-b034a2a33f68-serving-cert\") pod \"controller-manager-dfbd889b8-qst2k\" (UID: \"61efec99-c80c-4599-bf69-b034a2a33f68\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.200896 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61efec99-c80c-4599-bf69-b034a2a33f68-config\") pod \"controller-manager-dfbd889b8-qst2k\" (UID: \"61efec99-c80c-4599-bf69-b034a2a33f68\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.200924 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lt6x\" (UniqueName: \"kubernetes.io/projected/61efec99-c80c-4599-bf69-b034a2a33f68-kube-api-access-9lt6x\") pod \"controller-manager-dfbd889b8-qst2k\" (UID: \"61efec99-c80c-4599-bf69-b034a2a33f68\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.200973 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61efec99-c80c-4599-bf69-b034a2a33f68-client-ca\") pod \"controller-manager-dfbd889b8-qst2k\" (UID: \"61efec99-c80c-4599-bf69-b034a2a33f68\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.200997 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61efec99-c80c-4599-bf69-b034a2a33f68-proxy-ca-bundles\") 
pod \"controller-manager-dfbd889b8-qst2k\" (UID: \"61efec99-c80c-4599-bf69-b034a2a33f68\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.303653 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61efec99-c80c-4599-bf69-b034a2a33f68-serving-cert\") pod \"controller-manager-dfbd889b8-qst2k\" (UID: \"61efec99-c80c-4599-bf69-b034a2a33f68\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.303761 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61efec99-c80c-4599-bf69-b034a2a33f68-config\") pod \"controller-manager-dfbd889b8-qst2k\" (UID: \"61efec99-c80c-4599-bf69-b034a2a33f68\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.303803 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lt6x\" (UniqueName: \"kubernetes.io/projected/61efec99-c80c-4599-bf69-b034a2a33f68-kube-api-access-9lt6x\") pod \"controller-manager-dfbd889b8-qst2k\" (UID: \"61efec99-c80c-4599-bf69-b034a2a33f68\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.303840 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61efec99-c80c-4599-bf69-b034a2a33f68-client-ca\") pod \"controller-manager-dfbd889b8-qst2k\" (UID: \"61efec99-c80c-4599-bf69-b034a2a33f68\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.303884 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/61efec99-c80c-4599-bf69-b034a2a33f68-proxy-ca-bundles\") pod \"controller-manager-dfbd889b8-qst2k\" (UID: \"61efec99-c80c-4599-bf69-b034a2a33f68\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.305700 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61efec99-c80c-4599-bf69-b034a2a33f68-client-ca\") pod \"controller-manager-dfbd889b8-qst2k\" (UID: \"61efec99-c80c-4599-bf69-b034a2a33f68\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.308296 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61efec99-c80c-4599-bf69-b034a2a33f68-config\") pod \"controller-manager-dfbd889b8-qst2k\" (UID: \"61efec99-c80c-4599-bf69-b034a2a33f68\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.309817 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61efec99-c80c-4599-bf69-b034a2a33f68-proxy-ca-bundles\") pod \"controller-manager-dfbd889b8-qst2k\" (UID: \"61efec99-c80c-4599-bf69-b034a2a33f68\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.315943 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61efec99-c80c-4599-bf69-b034a2a33f68-serving-cert\") pod \"controller-manager-dfbd889b8-qst2k\" (UID: \"61efec99-c80c-4599-bf69-b034a2a33f68\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.326197 4814 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9lt6x\" (UniqueName: \"kubernetes.io/projected/61efec99-c80c-4599-bf69-b034a2a33f68-kube-api-access-9lt6x\") pod \"controller-manager-dfbd889b8-qst2k\" (UID: \"61efec99-c80c-4599-bf69-b034a2a33f68\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.401774 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.493432 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.507787 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e01ad65-8efd-4079-94b6-596a3fb6bc99-client-ca\") pod \"5e01ad65-8efd-4079-94b6-596a3fb6bc99\" (UID: \"5e01ad65-8efd-4079-94b6-596a3fb6bc99\") " Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.507876 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e01ad65-8efd-4079-94b6-596a3fb6bc99-serving-cert\") pod \"5e01ad65-8efd-4079-94b6-596a3fb6bc99\" (UID: \"5e01ad65-8efd-4079-94b6-596a3fb6bc99\") " Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.507941 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e01ad65-8efd-4079-94b6-596a3fb6bc99-config\") pod \"5e01ad65-8efd-4079-94b6-596a3fb6bc99\" (UID: \"5e01ad65-8efd-4079-94b6-596a3fb6bc99\") " Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.508107 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96dcf\" (UniqueName: 
\"kubernetes.io/projected/5e01ad65-8efd-4079-94b6-596a3fb6bc99-kube-api-access-96dcf\") pod \"5e01ad65-8efd-4079-94b6-596a3fb6bc99\" (UID: \"5e01ad65-8efd-4079-94b6-596a3fb6bc99\") " Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.508842 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e01ad65-8efd-4079-94b6-596a3fb6bc99-client-ca" (OuterVolumeSpecName: "client-ca") pod "5e01ad65-8efd-4079-94b6-596a3fb6bc99" (UID: "5e01ad65-8efd-4079-94b6-596a3fb6bc99"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.509117 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e01ad65-8efd-4079-94b6-596a3fb6bc99-config" (OuterVolumeSpecName: "config") pod "5e01ad65-8efd-4079-94b6-596a3fb6bc99" (UID: "5e01ad65-8efd-4079-94b6-596a3fb6bc99"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.511858 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e01ad65-8efd-4079-94b6-596a3fb6bc99-kube-api-access-96dcf" (OuterVolumeSpecName: "kube-api-access-96dcf") pod "5e01ad65-8efd-4079-94b6-596a3fb6bc99" (UID: "5e01ad65-8efd-4079-94b6-596a3fb6bc99"). InnerVolumeSpecName "kube-api-access-96dcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.512965 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e01ad65-8efd-4079-94b6-596a3fb6bc99-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5e01ad65-8efd-4079-94b6-596a3fb6bc99" (UID: "5e01ad65-8efd-4079-94b6-596a3fb6bc99"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.620474 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96dcf\" (UniqueName: \"kubernetes.io/projected/5e01ad65-8efd-4079-94b6-596a3fb6bc99-kube-api-access-96dcf\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.620535 4814 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e01ad65-8efd-4079-94b6-596a3fb6bc99-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.620555 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e01ad65-8efd-4079-94b6-596a3fb6bc99-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.620577 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e01ad65-8efd-4079-94b6-596a3fb6bc99-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:51 crc kubenswrapper[4814]: I1202 14:34:51.926203 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dfbd889b8-qst2k"] Dec 02 14:34:51 crc kubenswrapper[4814]: W1202 14:34:51.938931 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61efec99_c80c_4599_bf69_b034a2a33f68.slice/crio-1d0af64ee26df9e025331a51df7ea58b6b91753aa55654fc85391dbcdf7c396d WatchSource:0}: Error finding container 1d0af64ee26df9e025331a51df7ea58b6b91753aa55654fc85391dbcdf7c396d: Status 404 returned error can't find the container with id 1d0af64ee26df9e025331a51df7ea58b6b91753aa55654fc85391dbcdf7c396d Dec 02 14:34:52 crc kubenswrapper[4814]: I1202 14:34:52.015043 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" event={"ID":"61efec99-c80c-4599-bf69-b034a2a33f68","Type":"ContainerStarted","Data":"1d0af64ee26df9e025331a51df7ea58b6b91753aa55654fc85391dbcdf7c396d"} Dec 02 14:34:52 crc kubenswrapper[4814]: I1202 14:34:52.017063 4814 generic.go:334] "Generic (PLEG): container finished" podID="5e01ad65-8efd-4079-94b6-596a3fb6bc99" containerID="c8ff8d4f83ec8179c585373219b796b369a79b31d773a64c6e33d9955d929a70" exitCode=0 Dec 02 14:34:52 crc kubenswrapper[4814]: I1202 14:34:52.017158 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" Dec 02 14:34:52 crc kubenswrapper[4814]: I1202 14:34:52.017154 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" event={"ID":"5e01ad65-8efd-4079-94b6-596a3fb6bc99","Type":"ContainerDied","Data":"c8ff8d4f83ec8179c585373219b796b369a79b31d773a64c6e33d9955d929a70"} Dec 02 14:34:52 crc kubenswrapper[4814]: I1202 14:34:52.017276 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp" event={"ID":"5e01ad65-8efd-4079-94b6-596a3fb6bc99","Type":"ContainerDied","Data":"2fd5f30f5d323e31ec6fb919ec0b0e3c6aa02d8c8c6add8feb0130e9cf41d1da"} Dec 02 14:34:52 crc kubenswrapper[4814]: I1202 14:34:52.017302 4814 scope.go:117] "RemoveContainer" containerID="c8ff8d4f83ec8179c585373219b796b369a79b31d773a64c6e33d9955d929a70" Dec 02 14:34:52 crc kubenswrapper[4814]: I1202 14:34:52.031832 4814 scope.go:117] "RemoveContainer" containerID="c8ff8d4f83ec8179c585373219b796b369a79b31d773a64c6e33d9955d929a70" Dec 02 14:34:52 crc kubenswrapper[4814]: E1202 14:34:52.032269 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c8ff8d4f83ec8179c585373219b796b369a79b31d773a64c6e33d9955d929a70\": container with ID starting with c8ff8d4f83ec8179c585373219b796b369a79b31d773a64c6e33d9955d929a70 not found: ID does not exist" containerID="c8ff8d4f83ec8179c585373219b796b369a79b31d773a64c6e33d9955d929a70" Dec 02 14:34:52 crc kubenswrapper[4814]: I1202 14:34:52.032330 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8ff8d4f83ec8179c585373219b796b369a79b31d773a64c6e33d9955d929a70"} err="failed to get container status \"c8ff8d4f83ec8179c585373219b796b369a79b31d773a64c6e33d9955d929a70\": rpc error: code = NotFound desc = could not find container \"c8ff8d4f83ec8179c585373219b796b369a79b31d773a64c6e33d9955d929a70\": container with ID starting with c8ff8d4f83ec8179c585373219b796b369a79b31d773a64c6e33d9955d929a70 not found: ID does not exist" Dec 02 14:34:52 crc kubenswrapper[4814]: I1202 14:34:52.059563 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp"] Dec 02 14:34:52 crc kubenswrapper[4814]: I1202 14:34:52.064037 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9d44f64b-s8lkp"] Dec 02 14:34:52 crc kubenswrapper[4814]: I1202 14:34:52.155216 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47ebb74b-e9ca-4141-a7f2-18579529f5d0" path="/var/lib/kubelet/pods/47ebb74b-e9ca-4141-a7f2-18579529f5d0/volumes" Dec 02 14:34:52 crc kubenswrapper[4814]: I1202 14:34:52.155856 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e01ad65-8efd-4079-94b6-596a3fb6bc99" path="/var/lib/kubelet/pods/5e01ad65-8efd-4079-94b6-596a3fb6bc99/volumes" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.027655 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" 
event={"ID":"61efec99-c80c-4599-bf69-b034a2a33f68","Type":"ContainerStarted","Data":"de581ff221cb785aee324ff2ccb8862dd5494d71b60b596d66ca0e2385c9d060"} Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.028211 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.036070 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.050900 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" podStartSLOduration=4.050876625 podStartE2EDuration="4.050876625s" podCreationTimestamp="2025-12-02 14:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:34:53.046572874 +0000 UTC m=+355.170935088" watchObservedRunningTime="2025-12-02 14:34:53.050876625 +0000 UTC m=+355.175238829" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.675509 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n"] Dec 02 14:34:53 crc kubenswrapper[4814]: E1202 14:34:53.676031 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e01ad65-8efd-4079-94b6-596a3fb6bc99" containerName="route-controller-manager" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.676044 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e01ad65-8efd-4079-94b6-596a3fb6bc99" containerName="route-controller-manager" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.676149 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e01ad65-8efd-4079-94b6-596a3fb6bc99" containerName="route-controller-manager" Dec 02 14:34:53 crc 
kubenswrapper[4814]: I1202 14:34:53.676556 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.678987 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.679169 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.679293 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.679422 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.679488 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.679586 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.694038 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n"] Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.849323 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/341a463d-106d-441f-b3a3-5da1b37e34bf-config\") pod \"route-controller-manager-685f5864cd-7695n\" (UID: \"341a463d-106d-441f-b3a3-5da1b37e34bf\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" Dec 02 
14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.849391 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/341a463d-106d-441f-b3a3-5da1b37e34bf-serving-cert\") pod \"route-controller-manager-685f5864cd-7695n\" (UID: \"341a463d-106d-441f-b3a3-5da1b37e34bf\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.849621 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfzh5\" (UniqueName: \"kubernetes.io/projected/341a463d-106d-441f-b3a3-5da1b37e34bf-kube-api-access-xfzh5\") pod \"route-controller-manager-685f5864cd-7695n\" (UID: \"341a463d-106d-441f-b3a3-5da1b37e34bf\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.849731 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/341a463d-106d-441f-b3a3-5da1b37e34bf-client-ca\") pod \"route-controller-manager-685f5864cd-7695n\" (UID: \"341a463d-106d-441f-b3a3-5da1b37e34bf\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.950678 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/341a463d-106d-441f-b3a3-5da1b37e34bf-config\") pod \"route-controller-manager-685f5864cd-7695n\" (UID: \"341a463d-106d-441f-b3a3-5da1b37e34bf\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.950740 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/341a463d-106d-441f-b3a3-5da1b37e34bf-serving-cert\") pod \"route-controller-manager-685f5864cd-7695n\" (UID: \"341a463d-106d-441f-b3a3-5da1b37e34bf\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.950767 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfzh5\" (UniqueName: \"kubernetes.io/projected/341a463d-106d-441f-b3a3-5da1b37e34bf-kube-api-access-xfzh5\") pod \"route-controller-manager-685f5864cd-7695n\" (UID: \"341a463d-106d-441f-b3a3-5da1b37e34bf\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.950796 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/341a463d-106d-441f-b3a3-5da1b37e34bf-client-ca\") pod \"route-controller-manager-685f5864cd-7695n\" (UID: \"341a463d-106d-441f-b3a3-5da1b37e34bf\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.951731 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/341a463d-106d-441f-b3a3-5da1b37e34bf-client-ca\") pod \"route-controller-manager-685f5864cd-7695n\" (UID: \"341a463d-106d-441f-b3a3-5da1b37e34bf\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.952034 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/341a463d-106d-441f-b3a3-5da1b37e34bf-config\") pod \"route-controller-manager-685f5864cd-7695n\" (UID: \"341a463d-106d-441f-b3a3-5da1b37e34bf\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" Dec 02 14:34:53 
crc kubenswrapper[4814]: I1202 14:34:53.956867 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/341a463d-106d-441f-b3a3-5da1b37e34bf-serving-cert\") pod \"route-controller-manager-685f5864cd-7695n\" (UID: \"341a463d-106d-441f-b3a3-5da1b37e34bf\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.967943 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfzh5\" (UniqueName: \"kubernetes.io/projected/341a463d-106d-441f-b3a3-5da1b37e34bf-kube-api-access-xfzh5\") pod \"route-controller-manager-685f5864cd-7695n\" (UID: \"341a463d-106d-441f-b3a3-5da1b37e34bf\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" Dec 02 14:34:53 crc kubenswrapper[4814]: I1202 14:34:53.992101 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" Dec 02 14:34:54 crc kubenswrapper[4814]: I1202 14:34:54.250172 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n"] Dec 02 14:34:55 crc kubenswrapper[4814]: I1202 14:34:55.061659 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" event={"ID":"341a463d-106d-441f-b3a3-5da1b37e34bf","Type":"ContainerStarted","Data":"e9693f43118b18021c441eee56eb4af9a9e65b06dc9640aef56efab6da4677d0"} Dec 02 14:34:55 crc kubenswrapper[4814]: I1202 14:34:55.061721 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" event={"ID":"341a463d-106d-441f-b3a3-5da1b37e34bf","Type":"ContainerStarted","Data":"da611c1c2563cf9dcd0ccc550a09a77ff50f5dc52ce9a8d2e41271597fd7a5af"} Dec 02 
14:34:55 crc kubenswrapper[4814]: I1202 14:34:55.062642 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" Dec 02 14:34:55 crc kubenswrapper[4814]: I1202 14:34:55.085149 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" podStartSLOduration=6.085119787 podStartE2EDuration="6.085119787s" podCreationTimestamp="2025-12-02 14:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:34:55.081408102 +0000 UTC m=+357.205770346" watchObservedRunningTime="2025-12-02 14:34:55.085119787 +0000 UTC m=+357.209482001" Dec 02 14:34:55 crc kubenswrapper[4814]: I1202 14:34:55.120256 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" Dec 02 14:34:58 crc kubenswrapper[4814]: I1202 14:34:58.491343 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dfbd889b8-qst2k"] Dec 02 14:34:58 crc kubenswrapper[4814]: I1202 14:34:58.492301 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" podUID="61efec99-c80c-4599-bf69-b034a2a33f68" containerName="controller-manager" containerID="cri-o://de581ff221cb785aee324ff2ccb8862dd5494d71b60b596d66ca0e2385c9d060" gracePeriod=30 Dec 02 14:34:58 crc kubenswrapper[4814]: I1202 14:34:58.508973 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n"] Dec 02 14:34:58 crc kubenswrapper[4814]: I1202 14:34:58.509290 4814 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" podUID="341a463d-106d-441f-b3a3-5da1b37e34bf" containerName="route-controller-manager" containerID="cri-o://e9693f43118b18021c441eee56eb4af9a9e65b06dc9640aef56efab6da4677d0" gracePeriod=30 Dec 02 14:34:58 crc kubenswrapper[4814]: I1202 14:34:58.930126 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:58 crc kubenswrapper[4814]: I1202 14:34:58.969976 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.020993 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61efec99-c80c-4599-bf69-b034a2a33f68-client-ca\") pod \"61efec99-c80c-4599-bf69-b034a2a33f68\" (UID: \"61efec99-c80c-4599-bf69-b034a2a33f68\") " Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.021064 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lt6x\" (UniqueName: \"kubernetes.io/projected/61efec99-c80c-4599-bf69-b034a2a33f68-kube-api-access-9lt6x\") pod \"61efec99-c80c-4599-bf69-b034a2a33f68\" (UID: \"61efec99-c80c-4599-bf69-b034a2a33f68\") " Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.021163 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61efec99-c80c-4599-bf69-b034a2a33f68-serving-cert\") pod \"61efec99-c80c-4599-bf69-b034a2a33f68\" (UID: \"61efec99-c80c-4599-bf69-b034a2a33f68\") " Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.021206 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/61efec99-c80c-4599-bf69-b034a2a33f68-config\") pod \"61efec99-c80c-4599-bf69-b034a2a33f68\" (UID: \"61efec99-c80c-4599-bf69-b034a2a33f68\") " Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.021292 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61efec99-c80c-4599-bf69-b034a2a33f68-proxy-ca-bundles\") pod \"61efec99-c80c-4599-bf69-b034a2a33f68\" (UID: \"61efec99-c80c-4599-bf69-b034a2a33f68\") " Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.021488 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61efec99-c80c-4599-bf69-b034a2a33f68-client-ca" (OuterVolumeSpecName: "client-ca") pod "61efec99-c80c-4599-bf69-b034a2a33f68" (UID: "61efec99-c80c-4599-bf69-b034a2a33f68"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.022022 4814 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61efec99-c80c-4599-bf69-b034a2a33f68-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.022084 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61efec99-c80c-4599-bf69-b034a2a33f68-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "61efec99-c80c-4599-bf69-b034a2a33f68" (UID: "61efec99-c80c-4599-bf69-b034a2a33f68"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.022359 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61efec99-c80c-4599-bf69-b034a2a33f68-config" (OuterVolumeSpecName: "config") pod "61efec99-c80c-4599-bf69-b034a2a33f68" (UID: "61efec99-c80c-4599-bf69-b034a2a33f68"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.026874 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61efec99-c80c-4599-bf69-b034a2a33f68-kube-api-access-9lt6x" (OuterVolumeSpecName: "kube-api-access-9lt6x") pod "61efec99-c80c-4599-bf69-b034a2a33f68" (UID: "61efec99-c80c-4599-bf69-b034a2a33f68"). InnerVolumeSpecName "kube-api-access-9lt6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.028880 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61efec99-c80c-4599-bf69-b034a2a33f68-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "61efec99-c80c-4599-bf69-b034a2a33f68" (UID: "61efec99-c80c-4599-bf69-b034a2a33f68"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.086760 4814 generic.go:334] "Generic (PLEG): container finished" podID="341a463d-106d-441f-b3a3-5da1b37e34bf" containerID="e9693f43118b18021c441eee56eb4af9a9e65b06dc9640aef56efab6da4677d0" exitCode=0 Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.086842 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.086846 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" event={"ID":"341a463d-106d-441f-b3a3-5da1b37e34bf","Type":"ContainerDied","Data":"e9693f43118b18021c441eee56eb4af9a9e65b06dc9640aef56efab6da4677d0"} Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.087007 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n" event={"ID":"341a463d-106d-441f-b3a3-5da1b37e34bf","Type":"ContainerDied","Data":"da611c1c2563cf9dcd0ccc550a09a77ff50f5dc52ce9a8d2e41271597fd7a5af"} Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.087055 4814 scope.go:117] "RemoveContainer" containerID="e9693f43118b18021c441eee56eb4af9a9e65b06dc9640aef56efab6da4677d0" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.088528 4814 generic.go:334] "Generic (PLEG): container finished" podID="61efec99-c80c-4599-bf69-b034a2a33f68" containerID="de581ff221cb785aee324ff2ccb8862dd5494d71b60b596d66ca0e2385c9d060" exitCode=0 Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.088584 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.088577 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" event={"ID":"61efec99-c80c-4599-bf69-b034a2a33f68","Type":"ContainerDied","Data":"de581ff221cb785aee324ff2ccb8862dd5494d71b60b596d66ca0e2385c9d060"} Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.088697 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dfbd889b8-qst2k" event={"ID":"61efec99-c80c-4599-bf69-b034a2a33f68","Type":"ContainerDied","Data":"1d0af64ee26df9e025331a51df7ea58b6b91753aa55654fc85391dbcdf7c396d"} Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.106963 4814 scope.go:117] "RemoveContainer" containerID="e9693f43118b18021c441eee56eb4af9a9e65b06dc9640aef56efab6da4677d0" Dec 02 14:34:59 crc kubenswrapper[4814]: E1202 14:34:59.107389 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9693f43118b18021c441eee56eb4af9a9e65b06dc9640aef56efab6da4677d0\": container with ID starting with e9693f43118b18021c441eee56eb4af9a9e65b06dc9640aef56efab6da4677d0 not found: ID does not exist" containerID="e9693f43118b18021c441eee56eb4af9a9e65b06dc9640aef56efab6da4677d0" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.107437 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9693f43118b18021c441eee56eb4af9a9e65b06dc9640aef56efab6da4677d0"} err="failed to get container status \"e9693f43118b18021c441eee56eb4af9a9e65b06dc9640aef56efab6da4677d0\": rpc error: code = NotFound desc = could not find container \"e9693f43118b18021c441eee56eb4af9a9e65b06dc9640aef56efab6da4677d0\": container with ID starting with e9693f43118b18021c441eee56eb4af9a9e65b06dc9640aef56efab6da4677d0 not found: ID does not 
exist" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.107474 4814 scope.go:117] "RemoveContainer" containerID="de581ff221cb785aee324ff2ccb8862dd5494d71b60b596d66ca0e2385c9d060" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.120149 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dfbd889b8-qst2k"] Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.123249 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/341a463d-106d-441f-b3a3-5da1b37e34bf-serving-cert\") pod \"341a463d-106d-441f-b3a3-5da1b37e34bf\" (UID: \"341a463d-106d-441f-b3a3-5da1b37e34bf\") " Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.123296 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfzh5\" (UniqueName: \"kubernetes.io/projected/341a463d-106d-441f-b3a3-5da1b37e34bf-kube-api-access-xfzh5\") pod \"341a463d-106d-441f-b3a3-5da1b37e34bf\" (UID: \"341a463d-106d-441f-b3a3-5da1b37e34bf\") " Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.123379 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/341a463d-106d-441f-b3a3-5da1b37e34bf-config\") pod \"341a463d-106d-441f-b3a3-5da1b37e34bf\" (UID: \"341a463d-106d-441f-b3a3-5da1b37e34bf\") " Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.123471 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/341a463d-106d-441f-b3a3-5da1b37e34bf-client-ca\") pod \"341a463d-106d-441f-b3a3-5da1b37e34bf\" (UID: \"341a463d-106d-441f-b3a3-5da1b37e34bf\") " Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.123701 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61efec99-c80c-4599-bf69-b034a2a33f68-config\") on node 
\"crc\" DevicePath \"\"" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.123717 4814 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61efec99-c80c-4599-bf69-b034a2a33f68-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.123727 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lt6x\" (UniqueName: \"kubernetes.io/projected/61efec99-c80c-4599-bf69-b034a2a33f68-kube-api-access-9lt6x\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.123736 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61efec99-c80c-4599-bf69-b034a2a33f68-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.124659 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/341a463d-106d-441f-b3a3-5da1b37e34bf-client-ca" (OuterVolumeSpecName: "client-ca") pod "341a463d-106d-441f-b3a3-5da1b37e34bf" (UID: "341a463d-106d-441f-b3a3-5da1b37e34bf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.126510 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/341a463d-106d-441f-b3a3-5da1b37e34bf-config" (OuterVolumeSpecName: "config") pod "341a463d-106d-441f-b3a3-5da1b37e34bf" (UID: "341a463d-106d-441f-b3a3-5da1b37e34bf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.127444 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/341a463d-106d-441f-b3a3-5da1b37e34bf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "341a463d-106d-441f-b3a3-5da1b37e34bf" (UID: "341a463d-106d-441f-b3a3-5da1b37e34bf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.128160 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/341a463d-106d-441f-b3a3-5da1b37e34bf-kube-api-access-xfzh5" (OuterVolumeSpecName: "kube-api-access-xfzh5") pod "341a463d-106d-441f-b3a3-5da1b37e34bf" (UID: "341a463d-106d-441f-b3a3-5da1b37e34bf"). InnerVolumeSpecName "kube-api-access-xfzh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.129190 4814 scope.go:117] "RemoveContainer" containerID="de581ff221cb785aee324ff2ccb8862dd5494d71b60b596d66ca0e2385c9d060" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.129639 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-dfbd889b8-qst2k"] Dec 02 14:34:59 crc kubenswrapper[4814]: E1202 14:34:59.129857 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de581ff221cb785aee324ff2ccb8862dd5494d71b60b596d66ca0e2385c9d060\": container with ID starting with de581ff221cb785aee324ff2ccb8862dd5494d71b60b596d66ca0e2385c9d060 not found: ID does not exist" containerID="de581ff221cb785aee324ff2ccb8862dd5494d71b60b596d66ca0e2385c9d060" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.129904 4814 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"de581ff221cb785aee324ff2ccb8862dd5494d71b60b596d66ca0e2385c9d060"} err="failed to get container status \"de581ff221cb785aee324ff2ccb8862dd5494d71b60b596d66ca0e2385c9d060\": rpc error: code = NotFound desc = could not find container \"de581ff221cb785aee324ff2ccb8862dd5494d71b60b596d66ca0e2385c9d060\": container with ID starting with de581ff221cb785aee324ff2ccb8862dd5494d71b60b596d66ca0e2385c9d060 not found: ID does not exist" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.226206 4814 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/341a463d-106d-441f-b3a3-5da1b37e34bf-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.226744 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/341a463d-106d-441f-b3a3-5da1b37e34bf-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.226775 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfzh5\" (UniqueName: \"kubernetes.io/projected/341a463d-106d-441f-b3a3-5da1b37e34bf-kube-api-access-xfzh5\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.226788 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/341a463d-106d-441f-b3a3-5da1b37e34bf-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.428355 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n"] Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.432218 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-685f5864cd-7695n"] Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.687496 4814 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch"] Dec 02 14:34:59 crc kubenswrapper[4814]: E1202 14:34:59.689524 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61efec99-c80c-4599-bf69-b034a2a33f68" containerName="controller-manager" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.689607 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="61efec99-c80c-4599-bf69-b034a2a33f68" containerName="controller-manager" Dec 02 14:34:59 crc kubenswrapper[4814]: E1202 14:34:59.689678 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341a463d-106d-441f-b3a3-5da1b37e34bf" containerName="route-controller-manager" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.689736 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="341a463d-106d-441f-b3a3-5da1b37e34bf" containerName="route-controller-manager" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.689919 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="61efec99-c80c-4599-bf69-b034a2a33f68" containerName="controller-manager" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.689995 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="341a463d-106d-441f-b3a3-5da1b37e34bf" containerName="route-controller-manager" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.690601 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.693846 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.694659 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx"] Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.695650 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.699654 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.699725 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.699921 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.700373 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.701638 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.701715 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch"] Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.702164 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 02 
14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.702198 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.702811 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.702817 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.702951 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.707195 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx"] Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.709647 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.710627 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.834988 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qghnt\" (UniqueName: \"kubernetes.io/projected/3d6f33a1-d3a5-45bf-b37f-955f3b8b4538-kube-api-access-qghnt\") pod \"route-controller-manager-7d9d44f64b-nmxzx\" (UID: \"3d6f33a1-d3a5-45bf-b37f-955f3b8b4538\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.835066 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d6f33a1-d3a5-45bf-b37f-955f3b8b4538-client-ca\") pod \"route-controller-manager-7d9d44f64b-nmxzx\" (UID: \"3d6f33a1-d3a5-45bf-b37f-955f3b8b4538\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.835197 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e79d40f-a26d-4f6f-86c6-c12f3eb37e81-serving-cert\") pod \"controller-manager-7f97b6cc9d-cs5ch\" (UID: \"5e79d40f-a26d-4f6f-86c6-c12f3eb37e81\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.835231 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d6f33a1-d3a5-45bf-b37f-955f3b8b4538-config\") pod \"route-controller-manager-7d9d44f64b-nmxzx\" (UID: \"3d6f33a1-d3a5-45bf-b37f-955f3b8b4538\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.835277 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e79d40f-a26d-4f6f-86c6-c12f3eb37e81-config\") pod \"controller-manager-7f97b6cc9d-cs5ch\" (UID: \"5e79d40f-a26d-4f6f-86c6-c12f3eb37e81\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.835537 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e79d40f-a26d-4f6f-86c6-c12f3eb37e81-proxy-ca-bundles\") pod \"controller-manager-7f97b6cc9d-cs5ch\" (UID: \"5e79d40f-a26d-4f6f-86c6-c12f3eb37e81\") " 
pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.835643 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d6f33a1-d3a5-45bf-b37f-955f3b8b4538-serving-cert\") pod \"route-controller-manager-7d9d44f64b-nmxzx\" (UID: \"3d6f33a1-d3a5-45bf-b37f-955f3b8b4538\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.835700 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sqwt\" (UniqueName: \"kubernetes.io/projected/5e79d40f-a26d-4f6f-86c6-c12f3eb37e81-kube-api-access-2sqwt\") pod \"controller-manager-7f97b6cc9d-cs5ch\" (UID: \"5e79d40f-a26d-4f6f-86c6-c12f3eb37e81\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.835763 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e79d40f-a26d-4f6f-86c6-c12f3eb37e81-client-ca\") pod \"controller-manager-7f97b6cc9d-cs5ch\" (UID: \"5e79d40f-a26d-4f6f-86c6-c12f3eb37e81\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.937450 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e79d40f-a26d-4f6f-86c6-c12f3eb37e81-config\") pod \"controller-manager-7f97b6cc9d-cs5ch\" (UID: \"5e79d40f-a26d-4f6f-86c6-c12f3eb37e81\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.937520 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/5e79d40f-a26d-4f6f-86c6-c12f3eb37e81-proxy-ca-bundles\") pod \"controller-manager-7f97b6cc9d-cs5ch\" (UID: \"5e79d40f-a26d-4f6f-86c6-c12f3eb37e81\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.937558 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d6f33a1-d3a5-45bf-b37f-955f3b8b4538-serving-cert\") pod \"route-controller-manager-7d9d44f64b-nmxzx\" (UID: \"3d6f33a1-d3a5-45bf-b37f-955f3b8b4538\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.937579 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sqwt\" (UniqueName: \"kubernetes.io/projected/5e79d40f-a26d-4f6f-86c6-c12f3eb37e81-kube-api-access-2sqwt\") pod \"controller-manager-7f97b6cc9d-cs5ch\" (UID: \"5e79d40f-a26d-4f6f-86c6-c12f3eb37e81\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.937598 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e79d40f-a26d-4f6f-86c6-c12f3eb37e81-client-ca\") pod \"controller-manager-7f97b6cc9d-cs5ch\" (UID: \"5e79d40f-a26d-4f6f-86c6-c12f3eb37e81\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.937653 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qghnt\" (UniqueName: \"kubernetes.io/projected/3d6f33a1-d3a5-45bf-b37f-955f3b8b4538-kube-api-access-qghnt\") pod \"route-controller-manager-7d9d44f64b-nmxzx\" (UID: \"3d6f33a1-d3a5-45bf-b37f-955f3b8b4538\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx" Dec 02 14:34:59 
crc kubenswrapper[4814]: I1202 14:34:59.937671 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d6f33a1-d3a5-45bf-b37f-955f3b8b4538-client-ca\") pod \"route-controller-manager-7d9d44f64b-nmxzx\" (UID: \"3d6f33a1-d3a5-45bf-b37f-955f3b8b4538\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.937711 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e79d40f-a26d-4f6f-86c6-c12f3eb37e81-serving-cert\") pod \"controller-manager-7f97b6cc9d-cs5ch\" (UID: \"5e79d40f-a26d-4f6f-86c6-c12f3eb37e81\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.937738 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d6f33a1-d3a5-45bf-b37f-955f3b8b4538-config\") pod \"route-controller-manager-7d9d44f64b-nmxzx\" (UID: \"3d6f33a1-d3a5-45bf-b37f-955f3b8b4538\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.938921 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e79d40f-a26d-4f6f-86c6-c12f3eb37e81-client-ca\") pod \"controller-manager-7f97b6cc9d-cs5ch\" (UID: \"5e79d40f-a26d-4f6f-86c6-c12f3eb37e81\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.939120 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e79d40f-a26d-4f6f-86c6-c12f3eb37e81-proxy-ca-bundles\") pod \"controller-manager-7f97b6cc9d-cs5ch\" (UID: 
\"5e79d40f-a26d-4f6f-86c6-c12f3eb37e81\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.939200 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d6f33a1-d3a5-45bf-b37f-955f3b8b4538-config\") pod \"route-controller-manager-7d9d44f64b-nmxzx\" (UID: \"3d6f33a1-d3a5-45bf-b37f-955f3b8b4538\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.939640 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d6f33a1-d3a5-45bf-b37f-955f3b8b4538-client-ca\") pod \"route-controller-manager-7d9d44f64b-nmxzx\" (UID: \"3d6f33a1-d3a5-45bf-b37f-955f3b8b4538\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.939882 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e79d40f-a26d-4f6f-86c6-c12f3eb37e81-config\") pod \"controller-manager-7f97b6cc9d-cs5ch\" (UID: \"5e79d40f-a26d-4f6f-86c6-c12f3eb37e81\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.955357 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d6f33a1-d3a5-45bf-b37f-955f3b8b4538-serving-cert\") pod \"route-controller-manager-7d9d44f64b-nmxzx\" (UID: \"3d6f33a1-d3a5-45bf-b37f-955f3b8b4538\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.955356 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5e79d40f-a26d-4f6f-86c6-c12f3eb37e81-serving-cert\") pod \"controller-manager-7f97b6cc9d-cs5ch\" (UID: \"5e79d40f-a26d-4f6f-86c6-c12f3eb37e81\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.960125 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qghnt\" (UniqueName: \"kubernetes.io/projected/3d6f33a1-d3a5-45bf-b37f-955f3b8b4538-kube-api-access-qghnt\") pod \"route-controller-manager-7d9d44f64b-nmxzx\" (UID: \"3d6f33a1-d3a5-45bf-b37f-955f3b8b4538\") " pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx" Dec 02 14:34:59 crc kubenswrapper[4814]: I1202 14:34:59.960594 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sqwt\" (UniqueName: \"kubernetes.io/projected/5e79d40f-a26d-4f6f-86c6-c12f3eb37e81-kube-api-access-2sqwt\") pod \"controller-manager-7f97b6cc9d-cs5ch\" (UID: \"5e79d40f-a26d-4f6f-86c6-c12f3eb37e81\") " pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" Dec 02 14:35:00 crc kubenswrapper[4814]: I1202 14:35:00.015314 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" Dec 02 14:35:00 crc kubenswrapper[4814]: I1202 14:35:00.034467 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx" Dec 02 14:35:00 crc kubenswrapper[4814]: I1202 14:35:00.160170 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="341a463d-106d-441f-b3a3-5da1b37e34bf" path="/var/lib/kubelet/pods/341a463d-106d-441f-b3a3-5da1b37e34bf/volumes" Dec 02 14:35:00 crc kubenswrapper[4814]: I1202 14:35:00.161667 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61efec99-c80c-4599-bf69-b034a2a33f68" path="/var/lib/kubelet/pods/61efec99-c80c-4599-bf69-b034a2a33f68/volumes" Dec 02 14:35:00 crc kubenswrapper[4814]: I1202 14:35:00.376022 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx"] Dec 02 14:35:00 crc kubenswrapper[4814]: W1202 14:35:00.383445 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d6f33a1_d3a5_45bf_b37f_955f3b8b4538.slice/crio-28e82e9d9c044fcd0ebec246f7ab02fadd0d5b2597bed5fc4f961cc7415d0759 WatchSource:0}: Error finding container 28e82e9d9c044fcd0ebec246f7ab02fadd0d5b2597bed5fc4f961cc7415d0759: Status 404 returned error can't find the container with id 28e82e9d9c044fcd0ebec246f7ab02fadd0d5b2597bed5fc4f961cc7415d0759 Dec 02 14:35:00 crc kubenswrapper[4814]: I1202 14:35:00.482489 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch"] Dec 02 14:35:00 crc kubenswrapper[4814]: W1202 14:35:00.497137 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e79d40f_a26d_4f6f_86c6_c12f3eb37e81.slice/crio-291b8042a5cc77c816a405908b9cf432eb6adec5ce713a8d8aecdf7a7314a12e WatchSource:0}: Error finding container 291b8042a5cc77c816a405908b9cf432eb6adec5ce713a8d8aecdf7a7314a12e: Status 404 returned error can't find the 
container with id 291b8042a5cc77c816a405908b9cf432eb6adec5ce713a8d8aecdf7a7314a12e Dec 02 14:35:01 crc kubenswrapper[4814]: I1202 14:35:01.103891 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx" event={"ID":"3d6f33a1-d3a5-45bf-b37f-955f3b8b4538","Type":"ContainerStarted","Data":"3b937fcea1193a0be7c53bb470b632e857960ef53f2f7d1b5a654fbfa013a9c3"} Dec 02 14:35:01 crc kubenswrapper[4814]: I1202 14:35:01.104493 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx" event={"ID":"3d6f33a1-d3a5-45bf-b37f-955f3b8b4538","Type":"ContainerStarted","Data":"28e82e9d9c044fcd0ebec246f7ab02fadd0d5b2597bed5fc4f961cc7415d0759"} Dec 02 14:35:01 crc kubenswrapper[4814]: I1202 14:35:01.104516 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx" Dec 02 14:35:01 crc kubenswrapper[4814]: I1202 14:35:01.106054 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" event={"ID":"5e79d40f-a26d-4f6f-86c6-c12f3eb37e81","Type":"ContainerStarted","Data":"fca2aeb30d42c17dbf7892ddb0bd437315427bdadf7f9355a71003aea6642f4e"} Dec 02 14:35:01 crc kubenswrapper[4814]: I1202 14:35:01.106127 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" event={"ID":"5e79d40f-a26d-4f6f-86c6-c12f3eb37e81","Type":"ContainerStarted","Data":"291b8042a5cc77c816a405908b9cf432eb6adec5ce713a8d8aecdf7a7314a12e"} Dec 02 14:35:01 crc kubenswrapper[4814]: I1202 14:35:01.106327 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" Dec 02 14:35:01 crc kubenswrapper[4814]: I1202 14:35:01.110928 4814 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx" Dec 02 14:35:01 crc kubenswrapper[4814]: I1202 14:35:01.111242 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" Dec 02 14:35:01 crc kubenswrapper[4814]: I1202 14:35:01.134546 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d9d44f64b-nmxzx" podStartSLOduration=3.134520076 podStartE2EDuration="3.134520076s" podCreationTimestamp="2025-12-02 14:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:35:01.129565149 +0000 UTC m=+363.253927363" watchObservedRunningTime="2025-12-02 14:35:01.134520076 +0000 UTC m=+363.258882280" Dec 02 14:35:01 crc kubenswrapper[4814]: I1202 14:35:01.148404 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f97b6cc9d-cs5ch" podStartSLOduration=3.148388523 podStartE2EDuration="3.148388523s" podCreationTimestamp="2025-12-02 14:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:35:01.147225293 +0000 UTC m=+363.271587487" watchObservedRunningTime="2025-12-02 14:35:01.148388523 +0000 UTC m=+363.272750727" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.292566 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qt9v4"] Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.294511 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.313358 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qt9v4"] Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.426275 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/27a7d778-a029-4704-9ae2-90b8a5ac21a4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.426593 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/27a7d778-a029-4704-9ae2-90b8a5ac21a4-registry-tls\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.427126 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/27a7d778-a029-4704-9ae2-90b8a5ac21a4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.427200 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.427272 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/27a7d778-a029-4704-9ae2-90b8a5ac21a4-registry-certificates\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.427299 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/27a7d778-a029-4704-9ae2-90b8a5ac21a4-bound-sa-token\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.427325 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27a7d778-a029-4704-9ae2-90b8a5ac21a4-trusted-ca\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.427349 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf8ld\" (UniqueName: \"kubernetes.io/projected/27a7d778-a029-4704-9ae2-90b8a5ac21a4-kube-api-access-sf8ld\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.449843 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.529324 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/27a7d778-a029-4704-9ae2-90b8a5ac21a4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.529665 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/27a7d778-a029-4704-9ae2-90b8a5ac21a4-registry-tls\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.529811 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/27a7d778-a029-4704-9ae2-90b8a5ac21a4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.529896 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/27a7d778-a029-4704-9ae2-90b8a5ac21a4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.529997 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/27a7d778-a029-4704-9ae2-90b8a5ac21a4-registry-certificates\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.530073 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/27a7d778-a029-4704-9ae2-90b8a5ac21a4-bound-sa-token\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.530154 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27a7d778-a029-4704-9ae2-90b8a5ac21a4-trusted-ca\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.530282 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf8ld\" (UniqueName: \"kubernetes.io/projected/27a7d778-a029-4704-9ae2-90b8a5ac21a4-kube-api-access-sf8ld\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.531123 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/27a7d778-a029-4704-9ae2-90b8a5ac21a4-registry-certificates\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.532484 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27a7d778-a029-4704-9ae2-90b8a5ac21a4-trusted-ca\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.539428 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/27a7d778-a029-4704-9ae2-90b8a5ac21a4-registry-tls\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.540589 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/27a7d778-a029-4704-9ae2-90b8a5ac21a4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.547734 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/27a7d778-a029-4704-9ae2-90b8a5ac21a4-bound-sa-token\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: \"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.550158 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf8ld\" (UniqueName: \"kubernetes.io/projected/27a7d778-a029-4704-9ae2-90b8a5ac21a4-kube-api-access-sf8ld\") pod \"image-registry-66df7c8f76-qt9v4\" (UID: 
\"27a7d778-a029-4704-9ae2-90b8a5ac21a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:06 crc kubenswrapper[4814]: I1202 14:35:06.618883 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:07 crc kubenswrapper[4814]: I1202 14:35:07.118213 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qt9v4"] Dec 02 14:35:07 crc kubenswrapper[4814]: I1202 14:35:07.144271 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" event={"ID":"27a7d778-a029-4704-9ae2-90b8a5ac21a4","Type":"ContainerStarted","Data":"f8cc2a8b50a1d1cea673010d54944929ba65460f8457875fc27b0e18a9645dd2"} Dec 02 14:35:08 crc kubenswrapper[4814]: I1202 14:35:08.154046 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:08 crc kubenswrapper[4814]: I1202 14:35:08.154570 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" event={"ID":"27a7d778-a029-4704-9ae2-90b8a5ac21a4","Type":"ContainerStarted","Data":"774b3ea56d8bd0d76c99f85b2ebc62adcabeac5b624bd127276735f4bea8e2c9"} Dec 02 14:35:08 crc kubenswrapper[4814]: I1202 14:35:08.201057 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" podStartSLOduration=2.201034131 podStartE2EDuration="2.201034131s" podCreationTimestamp="2025-12-02 14:35:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:35:08.196824933 +0000 UTC m=+370.321187147" watchObservedRunningTime="2025-12-02 14:35:08.201034131 +0000 UTC m=+370.325396345" Dec 02 14:35:20 crc kubenswrapper[4814]: I1202 
14:35:20.651130 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:35:20 crc kubenswrapper[4814]: I1202 14:35:20.651957 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:35:26 crc kubenswrapper[4814]: I1202 14:35:26.628751 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-qt9v4" Dec 02 14:35:26 crc kubenswrapper[4814]: I1202 14:35:26.726534 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hvzjs"] Dec 02 14:35:50 crc kubenswrapper[4814]: I1202 14:35:50.651529 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:35:50 crc kubenswrapper[4814]: I1202 14:35:50.652154 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:35:50 crc kubenswrapper[4814]: I1202 14:35:50.652217 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 14:35:50 crc kubenswrapper[4814]: I1202 14:35:50.653044 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4d6d34edeed3132fe16646589dc9a6700a81ed66231cd064d17aa7b5c701ae63"} pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:35:50 crc kubenswrapper[4814]: I1202 14:35:50.653112 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" containerID="cri-o://4d6d34edeed3132fe16646589dc9a6700a81ed66231cd064d17aa7b5c701ae63" gracePeriod=600 Dec 02 14:35:51 crc kubenswrapper[4814]: I1202 14:35:51.450710 4814 generic.go:334] "Generic (PLEG): container finished" podID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerID="4d6d34edeed3132fe16646589dc9a6700a81ed66231cd064d17aa7b5c701ae63" exitCode=0 Dec 02 14:35:51 crc kubenswrapper[4814]: I1202 14:35:51.450828 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerDied","Data":"4d6d34edeed3132fe16646589dc9a6700a81ed66231cd064d17aa7b5c701ae63"} Dec 02 14:35:51 crc kubenswrapper[4814]: I1202 14:35:51.451216 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerStarted","Data":"2845690d7fa975a6fc9bee687fba5b5a81fdaa565ae0b556f8b2ea0b2d8f3b40"} Dec 02 14:35:51 crc kubenswrapper[4814]: I1202 14:35:51.451314 4814 scope.go:117] "RemoveContainer" 
containerID="0f853ad34106774117f7a5940122dcffd436047441229d1b49e43444c075aea2" Dec 02 14:35:51 crc kubenswrapper[4814]: I1202 14:35:51.766796 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" podUID="6a507d1f-367a-4ad8-b436-592c077be1f1" containerName="registry" containerID="cri-o://ca077ed28836578350bdfea51d1546b67a7125f0f485d8dc6885f5065612fc0c" gracePeriod=30 Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.231348 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.307964 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6a507d1f-367a-4ad8-b436-592c077be1f1-registry-certificates\") pod \"6a507d1f-367a-4ad8-b436-592c077be1f1\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.308064 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5fcn\" (UniqueName: \"kubernetes.io/projected/6a507d1f-367a-4ad8-b436-592c077be1f1-kube-api-access-w5fcn\") pod \"6a507d1f-367a-4ad8-b436-592c077be1f1\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.308133 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6a507d1f-367a-4ad8-b436-592c077be1f1-installation-pull-secrets\") pod \"6a507d1f-367a-4ad8-b436-592c077be1f1\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.308186 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/6a507d1f-367a-4ad8-b436-592c077be1f1-registry-tls\") pod \"6a507d1f-367a-4ad8-b436-592c077be1f1\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.308275 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6a507d1f-367a-4ad8-b436-592c077be1f1-ca-trust-extracted\") pod \"6a507d1f-367a-4ad8-b436-592c077be1f1\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.308321 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a507d1f-367a-4ad8-b436-592c077be1f1-trusted-ca\") pod \"6a507d1f-367a-4ad8-b436-592c077be1f1\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.308360 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6a507d1f-367a-4ad8-b436-592c077be1f1-bound-sa-token\") pod \"6a507d1f-367a-4ad8-b436-592c077be1f1\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.308526 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"6a507d1f-367a-4ad8-b436-592c077be1f1\" (UID: \"6a507d1f-367a-4ad8-b436-592c077be1f1\") " Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.310061 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a507d1f-367a-4ad8-b436-592c077be1f1-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "6a507d1f-367a-4ad8-b436-592c077be1f1" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.310076 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a507d1f-367a-4ad8-b436-592c077be1f1-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "6a507d1f-367a-4ad8-b436-592c077be1f1" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.314949 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a507d1f-367a-4ad8-b436-592c077be1f1-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "6a507d1f-367a-4ad8-b436-592c077be1f1" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.315024 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a507d1f-367a-4ad8-b436-592c077be1f1-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "6a507d1f-367a-4ad8-b436-592c077be1f1" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.317834 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "6a507d1f-367a-4ad8-b436-592c077be1f1" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.322537 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a507d1f-367a-4ad8-b436-592c077be1f1-kube-api-access-w5fcn" (OuterVolumeSpecName: "kube-api-access-w5fcn") pod "6a507d1f-367a-4ad8-b436-592c077be1f1" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1"). InnerVolumeSpecName "kube-api-access-w5fcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.322621 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a507d1f-367a-4ad8-b436-592c077be1f1-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "6a507d1f-367a-4ad8-b436-592c077be1f1" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.326433 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a507d1f-367a-4ad8-b436-592c077be1f1-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "6a507d1f-367a-4ad8-b436-592c077be1f1" (UID: "6a507d1f-367a-4ad8-b436-592c077be1f1"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.409702 4814 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6a507d1f-367a-4ad8-b436-592c077be1f1-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.409955 4814 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6a507d1f-367a-4ad8-b436-592c077be1f1-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.409967 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5fcn\" (UniqueName: \"kubernetes.io/projected/6a507d1f-367a-4ad8-b436-592c077be1f1-kube-api-access-w5fcn\") on node \"crc\" DevicePath \"\"" Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.409978 4814 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6a507d1f-367a-4ad8-b436-592c077be1f1-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.409988 4814 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6a507d1f-367a-4ad8-b436-592c077be1f1-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.409996 4814 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6a507d1f-367a-4ad8-b436-592c077be1f1-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.410005 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a507d1f-367a-4ad8-b436-592c077be1f1-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:35:52 crc 
kubenswrapper[4814]: I1202 14:35:52.467921 4814 generic.go:334] "Generic (PLEG): container finished" podID="6a507d1f-367a-4ad8-b436-592c077be1f1" containerID="ca077ed28836578350bdfea51d1546b67a7125f0f485d8dc6885f5065612fc0c" exitCode=0 Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.468028 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.468018 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" event={"ID":"6a507d1f-367a-4ad8-b436-592c077be1f1","Type":"ContainerDied","Data":"ca077ed28836578350bdfea51d1546b67a7125f0f485d8dc6885f5065612fc0c"} Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.468224 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hvzjs" event={"ID":"6a507d1f-367a-4ad8-b436-592c077be1f1","Type":"ContainerDied","Data":"8776fb73cf786131290372486852b715ab9ec9217e851da2672e62f5c7a5a7e3"} Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.468290 4814 scope.go:117] "RemoveContainer" containerID="ca077ed28836578350bdfea51d1546b67a7125f0f485d8dc6885f5065612fc0c" Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.496576 4814 scope.go:117] "RemoveContainer" containerID="ca077ed28836578350bdfea51d1546b67a7125f0f485d8dc6885f5065612fc0c" Dec 02 14:35:52 crc kubenswrapper[4814]: E1202 14:35:52.496989 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca077ed28836578350bdfea51d1546b67a7125f0f485d8dc6885f5065612fc0c\": container with ID starting with ca077ed28836578350bdfea51d1546b67a7125f0f485d8dc6885f5065612fc0c not found: ID does not exist" containerID="ca077ed28836578350bdfea51d1546b67a7125f0f485d8dc6885f5065612fc0c" Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.497022 4814 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca077ed28836578350bdfea51d1546b67a7125f0f485d8dc6885f5065612fc0c"} err="failed to get container status \"ca077ed28836578350bdfea51d1546b67a7125f0f485d8dc6885f5065612fc0c\": rpc error: code = NotFound desc = could not find container \"ca077ed28836578350bdfea51d1546b67a7125f0f485d8dc6885f5065612fc0c\": container with ID starting with ca077ed28836578350bdfea51d1546b67a7125f0f485d8dc6885f5065612fc0c not found: ID does not exist" Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.509788 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hvzjs"] Dec 02 14:35:52 crc kubenswrapper[4814]: I1202 14:35:52.515855 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hvzjs"] Dec 02 14:35:54 crc kubenswrapper[4814]: I1202 14:35:54.155651 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a507d1f-367a-4ad8-b436-592c077be1f1" path="/var/lib/kubelet/pods/6a507d1f-367a-4ad8-b436-592c077be1f1/volumes" Dec 02 14:37:50 crc kubenswrapper[4814]: I1202 14:37:50.650853 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:37:50 crc kubenswrapper[4814]: I1202 14:37:50.651466 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:38:20 crc kubenswrapper[4814]: I1202 14:38:20.651319 4814 patch_prober.go:28] interesting 
pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:38:20 crc kubenswrapper[4814]: I1202 14:38:20.652161 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:38:32 crc kubenswrapper[4814]: I1202 14:38:32.256701 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf"] Dec 02 14:38:32 crc kubenswrapper[4814]: E1202 14:38:32.257987 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a507d1f-367a-4ad8-b436-592c077be1f1" containerName="registry" Dec 02 14:38:32 crc kubenswrapper[4814]: I1202 14:38:32.258015 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a507d1f-367a-4ad8-b436-592c077be1f1" containerName="registry" Dec 02 14:38:32 crc kubenswrapper[4814]: I1202 14:38:32.258308 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a507d1f-367a-4ad8-b436-592c077be1f1" containerName="registry" Dec 02 14:38:32 crc kubenswrapper[4814]: I1202 14:38:32.260040 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf" Dec 02 14:38:32 crc kubenswrapper[4814]: I1202 14:38:32.263678 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 02 14:38:32 crc kubenswrapper[4814]: I1202 14:38:32.270528 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf"] Dec 02 14:38:32 crc kubenswrapper[4814]: I1202 14:38:32.331120 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4hdb\" (UniqueName: \"kubernetes.io/projected/66a95946-f709-4cb6-8285-982ae589bd5c-kube-api-access-t4hdb\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf\" (UID: \"66a95946-f709-4cb6-8285-982ae589bd5c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf" Dec 02 14:38:32 crc kubenswrapper[4814]: I1202 14:38:32.331566 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66a95946-f709-4cb6-8285-982ae589bd5c-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf\" (UID: \"66a95946-f709-4cb6-8285-982ae589bd5c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf" Dec 02 14:38:32 crc kubenswrapper[4814]: I1202 14:38:32.331930 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66a95946-f709-4cb6-8285-982ae589bd5c-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf\" (UID: \"66a95946-f709-4cb6-8285-982ae589bd5c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf" Dec 02 14:38:32 crc kubenswrapper[4814]: 
I1202 14:38:32.433267 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66a95946-f709-4cb6-8285-982ae589bd5c-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf\" (UID: \"66a95946-f709-4cb6-8285-982ae589bd5c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf" Dec 02 14:38:32 crc kubenswrapper[4814]: I1202 14:38:32.433379 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4hdb\" (UniqueName: \"kubernetes.io/projected/66a95946-f709-4cb6-8285-982ae589bd5c-kube-api-access-t4hdb\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf\" (UID: \"66a95946-f709-4cb6-8285-982ae589bd5c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf" Dec 02 14:38:32 crc kubenswrapper[4814]: I1202 14:38:32.433477 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66a95946-f709-4cb6-8285-982ae589bd5c-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf\" (UID: \"66a95946-f709-4cb6-8285-982ae589bd5c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf" Dec 02 14:38:32 crc kubenswrapper[4814]: I1202 14:38:32.434098 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66a95946-f709-4cb6-8285-982ae589bd5c-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf\" (UID: \"66a95946-f709-4cb6-8285-982ae589bd5c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf" Dec 02 14:38:32 crc kubenswrapper[4814]: I1202 14:38:32.434375 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/66a95946-f709-4cb6-8285-982ae589bd5c-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf\" (UID: \"66a95946-f709-4cb6-8285-982ae589bd5c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf" Dec 02 14:38:32 crc kubenswrapper[4814]: I1202 14:38:32.458713 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4hdb\" (UniqueName: \"kubernetes.io/projected/66a95946-f709-4cb6-8285-982ae589bd5c-kube-api-access-t4hdb\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf\" (UID: \"66a95946-f709-4cb6-8285-982ae589bd5c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf" Dec 02 14:38:32 crc kubenswrapper[4814]: I1202 14:38:32.582103 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf" Dec 02 14:38:33 crc kubenswrapper[4814]: I1202 14:38:33.027415 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf"] Dec 02 14:38:33 crc kubenswrapper[4814]: I1202 14:38:33.568982 4814 generic.go:334] "Generic (PLEG): container finished" podID="66a95946-f709-4cb6-8285-982ae589bd5c" containerID="cf5f038dd9567d94c4d2b9142b9db651e02f9980037be7bf47e8ec700e9ecdd2" exitCode=0 Dec 02 14:38:33 crc kubenswrapper[4814]: I1202 14:38:33.569063 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf" event={"ID":"66a95946-f709-4cb6-8285-982ae589bd5c","Type":"ContainerDied","Data":"cf5f038dd9567d94c4d2b9142b9db651e02f9980037be7bf47e8ec700e9ecdd2"} Dec 02 14:38:33 crc kubenswrapper[4814]: I1202 14:38:33.569522 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf" event={"ID":"66a95946-f709-4cb6-8285-982ae589bd5c","Type":"ContainerStarted","Data":"33822300cf54bcb72bc45d68ee28f2a24cd3960fddd036200986812a45391e14"} Dec 02 14:38:33 crc kubenswrapper[4814]: I1202 14:38:33.572607 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 14:38:35 crc kubenswrapper[4814]: I1202 14:38:35.584957 4814 generic.go:334] "Generic (PLEG): container finished" podID="66a95946-f709-4cb6-8285-982ae589bd5c" containerID="69f9f5df0c384c1679cc5a0e10d9ffd7fdc7bf2f9a5014e625861679912e98a4" exitCode=0 Dec 02 14:38:35 crc kubenswrapper[4814]: I1202 14:38:35.585032 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf" event={"ID":"66a95946-f709-4cb6-8285-982ae589bd5c","Type":"ContainerDied","Data":"69f9f5df0c384c1679cc5a0e10d9ffd7fdc7bf2f9a5014e625861679912e98a4"} Dec 02 14:38:36 crc kubenswrapper[4814]: I1202 14:38:36.595851 4814 generic.go:334] "Generic (PLEG): container finished" podID="66a95946-f709-4cb6-8285-982ae589bd5c" containerID="c6094fdaca006b45e64a10340b0d8da84e276b965be798819a5d45497cd3f61c" exitCode=0 Dec 02 14:38:36 crc kubenswrapper[4814]: I1202 14:38:36.595921 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf" event={"ID":"66a95946-f709-4cb6-8285-982ae589bd5c","Type":"ContainerDied","Data":"c6094fdaca006b45e64a10340b0d8da84e276b965be798819a5d45497cd3f61c"} Dec 02 14:38:37 crc kubenswrapper[4814]: I1202 14:38:37.909416 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf" Dec 02 14:38:38 crc kubenswrapper[4814]: I1202 14:38:38.012484 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66a95946-f709-4cb6-8285-982ae589bd5c-bundle\") pod \"66a95946-f709-4cb6-8285-982ae589bd5c\" (UID: \"66a95946-f709-4cb6-8285-982ae589bd5c\") " Dec 02 14:38:38 crc kubenswrapper[4814]: I1202 14:38:38.012593 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66a95946-f709-4cb6-8285-982ae589bd5c-util\") pod \"66a95946-f709-4cb6-8285-982ae589bd5c\" (UID: \"66a95946-f709-4cb6-8285-982ae589bd5c\") " Dec 02 14:38:38 crc kubenswrapper[4814]: I1202 14:38:38.012646 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4hdb\" (UniqueName: \"kubernetes.io/projected/66a95946-f709-4cb6-8285-982ae589bd5c-kube-api-access-t4hdb\") pod \"66a95946-f709-4cb6-8285-982ae589bd5c\" (UID: \"66a95946-f709-4cb6-8285-982ae589bd5c\") " Dec 02 14:38:38 crc kubenswrapper[4814]: I1202 14:38:38.016316 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a95946-f709-4cb6-8285-982ae589bd5c-bundle" (OuterVolumeSpecName: "bundle") pod "66a95946-f709-4cb6-8285-982ae589bd5c" (UID: "66a95946-f709-4cb6-8285-982ae589bd5c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:38:38 crc kubenswrapper[4814]: I1202 14:38:38.019407 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a95946-f709-4cb6-8285-982ae589bd5c-kube-api-access-t4hdb" (OuterVolumeSpecName: "kube-api-access-t4hdb") pod "66a95946-f709-4cb6-8285-982ae589bd5c" (UID: "66a95946-f709-4cb6-8285-982ae589bd5c"). InnerVolumeSpecName "kube-api-access-t4hdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:38:38 crc kubenswrapper[4814]: I1202 14:38:38.035783 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a95946-f709-4cb6-8285-982ae589bd5c-util" (OuterVolumeSpecName: "util") pod "66a95946-f709-4cb6-8285-982ae589bd5c" (UID: "66a95946-f709-4cb6-8285-982ae589bd5c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:38:38 crc kubenswrapper[4814]: I1202 14:38:38.113793 4814 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66a95946-f709-4cb6-8285-982ae589bd5c-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:38 crc kubenswrapper[4814]: I1202 14:38:38.113849 4814 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66a95946-f709-4cb6-8285-982ae589bd5c-util\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:38 crc kubenswrapper[4814]: I1202 14:38:38.113869 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4hdb\" (UniqueName: \"kubernetes.io/projected/66a95946-f709-4cb6-8285-982ae589bd5c-kube-api-access-t4hdb\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:38 crc kubenswrapper[4814]: I1202 14:38:38.609313 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf" event={"ID":"66a95946-f709-4cb6-8285-982ae589bd5c","Type":"ContainerDied","Data":"33822300cf54bcb72bc45d68ee28f2a24cd3960fddd036200986812a45391e14"} Dec 02 14:38:38 crc kubenswrapper[4814]: I1202 14:38:38.609615 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33822300cf54bcb72bc45d68ee28f2a24cd3960fddd036200986812a45391e14" Dec 02 14:38:38 crc kubenswrapper[4814]: I1202 14:38:38.609372 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.252713 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gnpv5"] Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.255180 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovn-controller" containerID="cri-o://db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9" gracePeriod=30 Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.255263 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="nbdb" containerID="cri-o://593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff" gracePeriod=30 Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.255365 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="northd" containerID="cri-o://9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe" gracePeriod=30 Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.255419 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f" gracePeriod=30 Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.255461 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" 
containerName="kube-rbac-proxy-node" containerID="cri-o://b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4" gracePeriod=30 Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.255503 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovn-acl-logging" containerID="cri-o://a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3" gracePeriod=30 Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.255694 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="sbdb" containerID="cri-o://2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374" gracePeriod=30 Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.294686 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovnkube-controller" containerID="cri-o://8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f" gracePeriod=30 Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.605194 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gnpv5_d5cd5566-9b36-421b-8f02-f3d12db9b829/ovnkube-controller/3.log" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.607699 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gnpv5_d5cd5566-9b36-421b-8f02-f3d12db9b829/ovn-acl-logging/0.log" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.608171 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gnpv5_d5cd5566-9b36-421b-8f02-f3d12db9b829/ovn-controller/0.log" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.608546 4814 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.642529 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cd6tj_733ab60d-6038-4a5b-9120-9c4ef1f47774/kube-multus/2.log" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.642923 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cd6tj_733ab60d-6038-4a5b-9120-9c4ef1f47774/kube-multus/1.log" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.642954 4814 generic.go:334] "Generic (PLEG): container finished" podID="733ab60d-6038-4a5b-9120-9c4ef1f47774" containerID="848e89bc2046be5579f5aafbad4aee9b0f7d56fa9305f8e6ccd290ab3578b5ef" exitCode=2 Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.642996 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cd6tj" event={"ID":"733ab60d-6038-4a5b-9120-9c4ef1f47774","Type":"ContainerDied","Data":"848e89bc2046be5579f5aafbad4aee9b0f7d56fa9305f8e6ccd290ab3578b5ef"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.643028 4814 scope.go:117] "RemoveContainer" containerID="856a317b4636fc96e9833f76c75051122ac8cde18987182481a998ea6c5824db" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.643478 4814 scope.go:117] "RemoveContainer" containerID="848e89bc2046be5579f5aafbad4aee9b0f7d56fa9305f8e6ccd290ab3578b5ef" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.643633 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-cd6tj_openshift-multus(733ab60d-6038-4a5b-9120-9c4ef1f47774)\"" pod="openshift-multus/multus-cd6tj" podUID="733ab60d-6038-4a5b-9120-9c4ef1f47774" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.647566 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gnpv5_d5cd5566-9b36-421b-8f02-f3d12db9b829/ovnkube-controller/3.log" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.650185 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gnpv5_d5cd5566-9b36-421b-8f02-f3d12db9b829/ovn-acl-logging/0.log" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.650654 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gnpv5_d5cd5566-9b36-421b-8f02-f3d12db9b829/ovn-controller/0.log" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.650971 4814 generic.go:334] "Generic (PLEG): container finished" podID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerID="8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f" exitCode=0 Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.651175 4814 generic.go:334] "Generic (PLEG): container finished" podID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerID="2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374" exitCode=0 Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.651335 4814 generic.go:334] "Generic (PLEG): container finished" podID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerID="593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff" exitCode=0 Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.651419 4814 generic.go:334] "Generic (PLEG): container finished" podID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerID="9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe" exitCode=0 Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.651481 4814 generic.go:334] "Generic (PLEG): container finished" podID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerID="261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f" exitCode=0 Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.651550 4814 generic.go:334] "Generic (PLEG): container finished" 
podID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerID="b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4" exitCode=0 Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.651610 4814 generic.go:334] "Generic (PLEG): container finished" podID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerID="a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3" exitCode=143 Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.651681 4814 generic.go:334] "Generic (PLEG): container finished" podID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerID="db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9" exitCode=143 Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.651177 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.651099 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerDied","Data":"8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.651910 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerDied","Data":"2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.651928 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerDied","Data":"593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.651940 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" 
event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerDied","Data":"9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.651950 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerDied","Data":"261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.651961 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerDied","Data":"b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.651971 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.651981 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.651986 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.651993 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.651999 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652005 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652011 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652016 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652021 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652026 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652033 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerDied","Data":"a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652042 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652048 4814 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652054 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652059 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652065 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652070 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652075 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652080 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652087 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652091 4814 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652098 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerDied","Data":"db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652106 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652113 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652118 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652123 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652130 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652135 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f"} Dec 02 
14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652140 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652145 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652150 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652154 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652161 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnpv5" event={"ID":"d5cd5566-9b36-421b-8f02-f3d12db9b829","Type":"ContainerDied","Data":"7c28694b94f1aab9bcdd24717a62591363cf984d31eaf38087b6781e1227eabd"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652169 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652176 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652180 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652185 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652190 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652196 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652202 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652207 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652212 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.652218 4814 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c"} Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.671876 4814 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-node-js4sn"] Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.672128 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovnkube-controller" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672152 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovnkube-controller" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.672163 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="kubecfg-setup" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672172 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="kubecfg-setup" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.672183 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovn-acl-logging" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672193 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovn-acl-logging" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.672204 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovnkube-controller" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672212 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovnkube-controller" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.672221 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="nbdb" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672246 4814 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="nbdb" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.672261 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="northd" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672270 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="northd" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.672281 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="kube-rbac-proxy-ovn-metrics" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672289 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="kube-rbac-proxy-ovn-metrics" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.672298 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="sbdb" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672324 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="sbdb" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.672338 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a95946-f709-4cb6-8285-982ae589bd5c" containerName="util" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672346 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a95946-f709-4cb6-8285-982ae589bd5c" containerName="util" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.672357 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovnkube-controller" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672364 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" 
containerName="ovnkube-controller" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.672371 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovnkube-controller" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672378 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovnkube-controller" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.672389 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovnkube-controller" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672396 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovnkube-controller" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.672406 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovn-controller" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672413 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovn-controller" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.672424 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a95946-f709-4cb6-8285-982ae589bd5c" containerName="extract" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672431 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a95946-f709-4cb6-8285-982ae589bd5c" containerName="extract" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.672442 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a95946-f709-4cb6-8285-982ae589bd5c" containerName="pull" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672451 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a95946-f709-4cb6-8285-982ae589bd5c" containerName="pull" Dec 02 
14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.672460 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="kube-rbac-proxy-node" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672468 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="kube-rbac-proxy-node" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672605 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovnkube-controller" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672617 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="kube-rbac-proxy-node" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672626 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovnkube-controller" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672635 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="northd" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672642 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovnkube-controller" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672651 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="sbdb" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672659 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a95946-f709-4cb6-8285-982ae589bd5c" containerName="extract" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672666 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" 
containerName="kube-rbac-proxy-ovn-metrics" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672676 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovn-acl-logging" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672682 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovn-controller" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672690 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="nbdb" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672870 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovnkube-controller" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.672883 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" containerName="ovnkube-controller" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.674424 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.677758 4814 scope.go:117] "RemoveContainer" containerID="8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.695585 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-systemd-units\") pod \"d5cd5566-9b36-421b-8f02-f3d12db9b829\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.695735 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-slash\") pod \"d5cd5566-9b36-421b-8f02-f3d12db9b829\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.695824 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-run-systemd\") pod \"d5cd5566-9b36-421b-8f02-f3d12db9b829\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.695888 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-cni-bin\") pod \"d5cd5566-9b36-421b-8f02-f3d12db9b829\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.695965 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"d5cd5566-9b36-421b-8f02-f3d12db9b829\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.696045 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d5cd5566-9b36-421b-8f02-f3d12db9b829-ovn-node-metrics-cert\") pod \"d5cd5566-9b36-421b-8f02-f3d12db9b829\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.696115 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-cni-netd\") pod \"d5cd5566-9b36-421b-8f02-f3d12db9b829\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.696192 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-kubelet\") pod \"d5cd5566-9b36-421b-8f02-f3d12db9b829\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.696319 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-etc-openvswitch\") pod \"d5cd5566-9b36-421b-8f02-f3d12db9b829\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.696406 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d5cd5566-9b36-421b-8f02-f3d12db9b829-env-overrides\") pod \"d5cd5566-9b36-421b-8f02-f3d12db9b829\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.696485 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-node-log\") pod \"d5cd5566-9b36-421b-8f02-f3d12db9b829\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.696556 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-var-lib-openvswitch\") pod \"d5cd5566-9b36-421b-8f02-f3d12db9b829\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.696654 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-log-socket\") pod \"d5cd5566-9b36-421b-8f02-f3d12db9b829\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.696718 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-run-ovn-kubernetes\") pod \"d5cd5566-9b36-421b-8f02-f3d12db9b829\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.696804 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh6s8\" (UniqueName: \"kubernetes.io/projected/d5cd5566-9b36-421b-8f02-f3d12db9b829-kube-api-access-lh6s8\") pod \"d5cd5566-9b36-421b-8f02-f3d12db9b829\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.696876 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-run-netns\") pod \"d5cd5566-9b36-421b-8f02-f3d12db9b829\" (UID: 
\"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.696940 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d5cd5566-9b36-421b-8f02-f3d12db9b829-ovnkube-script-lib\") pod \"d5cd5566-9b36-421b-8f02-f3d12db9b829\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.697002 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-run-ovn\") pod \"d5cd5566-9b36-421b-8f02-f3d12db9b829\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.697094 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d5cd5566-9b36-421b-8f02-f3d12db9b829-ovnkube-config\") pod \"d5cd5566-9b36-421b-8f02-f3d12db9b829\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.697214 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-run-openvswitch\") pod \"d5cd5566-9b36-421b-8f02-f3d12db9b829\" (UID: \"d5cd5566-9b36-421b-8f02-f3d12db9b829\") " Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.697409 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-run-ovn\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.697491 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-ovn-node-metrics-cert\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.697566 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-systemd-units\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.697644 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-ovnkube-config\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.697722 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78lq5\" (UniqueName: \"kubernetes.io/projected/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-kube-api-access-78lq5\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.697804 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-log-socket\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.697889 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-env-overrides\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.697956 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-node-log\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.698041 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-cni-netd\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.698132 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.698572 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-ovnkube-script-lib\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc 
kubenswrapper[4814]: I1202 14:38:43.698667 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-slash\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.698802 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-run-openvswitch\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.698910 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-etc-openvswitch\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.699003 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-run-systemd\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.698448 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d5cd5566-9b36-421b-8f02-f3d12db9b829" (UID: "d5cd5566-9b36-421b-8f02-f3d12db9b829"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.698483 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-slash" (OuterVolumeSpecName: "host-slash") pod "d5cd5566-9b36-421b-8f02-f3d12db9b829" (UID: "d5cd5566-9b36-421b-8f02-f3d12db9b829"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.698911 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d5cd5566-9b36-421b-8f02-f3d12db9b829" (UID: "d5cd5566-9b36-421b-8f02-f3d12db9b829"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.698942 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d5cd5566-9b36-421b-8f02-f3d12db9b829" (UID: "d5cd5566-9b36-421b-8f02-f3d12db9b829"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.699084 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-run-ovn-kubernetes\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.699206 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-run-netns\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.699295 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-var-lib-openvswitch\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.699336 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-kubelet\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.699377 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-cni-bin\") pod \"ovnkube-node-js4sn\" (UID: 
\"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.699500 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d5cd5566-9b36-421b-8f02-f3d12db9b829" (UID: "d5cd5566-9b36-421b-8f02-f3d12db9b829"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.699584 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d5cd5566-9b36-421b-8f02-f3d12db9b829" (UID: "d5cd5566-9b36-421b-8f02-f3d12db9b829"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.699653 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d5cd5566-9b36-421b-8f02-f3d12db9b829" (UID: "d5cd5566-9b36-421b-8f02-f3d12db9b829"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.699724 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d5cd5566-9b36-421b-8f02-f3d12db9b829" (UID: "d5cd5566-9b36-421b-8f02-f3d12db9b829"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.699811 4814 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.699842 4814 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-slash\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.699855 4814 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.699869 4814 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.699959 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d5cd5566-9b36-421b-8f02-f3d12db9b829" (UID: "d5cd5566-9b36-421b-8f02-f3d12db9b829"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.699995 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d5cd5566-9b36-421b-8f02-f3d12db9b829" (UID: "d5cd5566-9b36-421b-8f02-f3d12db9b829"). 
InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.700030 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-node-log" (OuterVolumeSpecName: "node-log") pod "d5cd5566-9b36-421b-8f02-f3d12db9b829" (UID: "d5cd5566-9b36-421b-8f02-f3d12db9b829"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.700062 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d5cd5566-9b36-421b-8f02-f3d12db9b829" (UID: "d5cd5566-9b36-421b-8f02-f3d12db9b829"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.700086 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-log-socket" (OuterVolumeSpecName: "log-socket") pod "d5cd5566-9b36-421b-8f02-f3d12db9b829" (UID: "d5cd5566-9b36-421b-8f02-f3d12db9b829"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.700129 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d5cd5566-9b36-421b-8f02-f3d12db9b829" (UID: "d5cd5566-9b36-421b-8f02-f3d12db9b829"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.700246 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5cd5566-9b36-421b-8f02-f3d12db9b829-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d5cd5566-9b36-421b-8f02-f3d12db9b829" (UID: "d5cd5566-9b36-421b-8f02-f3d12db9b829"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.700668 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5cd5566-9b36-421b-8f02-f3d12db9b829-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d5cd5566-9b36-421b-8f02-f3d12db9b829" (UID: "d5cd5566-9b36-421b-8f02-f3d12db9b829"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.700711 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5cd5566-9b36-421b-8f02-f3d12db9b829-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d5cd5566-9b36-421b-8f02-f3d12db9b829" (UID: "d5cd5566-9b36-421b-8f02-f3d12db9b829"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.702277 4814 scope.go:117] "RemoveContainer" containerID="695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.708760 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5cd5566-9b36-421b-8f02-f3d12db9b829-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d5cd5566-9b36-421b-8f02-f3d12db9b829" (UID: "d5cd5566-9b36-421b-8f02-f3d12db9b829"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.710751 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5cd5566-9b36-421b-8f02-f3d12db9b829-kube-api-access-lh6s8" (OuterVolumeSpecName: "kube-api-access-lh6s8") pod "d5cd5566-9b36-421b-8f02-f3d12db9b829" (UID: "d5cd5566-9b36-421b-8f02-f3d12db9b829"). InnerVolumeSpecName "kube-api-access-lh6s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.718312 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d5cd5566-9b36-421b-8f02-f3d12db9b829" (UID: "d5cd5566-9b36-421b-8f02-f3d12db9b829"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.737179 4814 scope.go:117] "RemoveContainer" containerID="2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.754430 4814 scope.go:117] "RemoveContainer" containerID="593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.770343 4814 scope.go:117] "RemoveContainer" containerID="9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.781953 4814 scope.go:117] "RemoveContainer" containerID="261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.793554 4814 scope.go:117] "RemoveContainer" containerID="b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.800714 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-var-lib-openvswitch\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.800758 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-kubelet\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.800786 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-cni-bin\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.800813 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-run-ovn\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.800835 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-ovn-node-metrics-cert\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.800848 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-var-lib-openvswitch\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.800906 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-kubelet\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.800929 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-cni-bin\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.800941 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-systemd-units\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801005 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-systemd-units\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.800920 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-run-ovn\") pod \"ovnkube-node-js4sn\" (UID: 
\"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801037 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-ovnkube-config\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801074 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78lq5\" (UniqueName: \"kubernetes.io/projected/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-kube-api-access-78lq5\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801149 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-log-socket\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801173 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-env-overrides\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801197 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-node-log\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801217 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-cni-netd\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801273 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801296 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-ovnkube-script-lib\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801313 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-slash\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801342 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-run-openvswitch\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801379 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-etc-openvswitch\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801414 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801417 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-run-systemd\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801464 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-run-ovn-kubernetes\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801488 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-run-netns\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801552 4814 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801565 4814 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801576 4814 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d5cd5566-9b36-421b-8f02-f3d12db9b829-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801591 4814 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-node-log\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801603 4814 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801615 4814 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-log-socket\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801625 4814 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801637 4814 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh6s8\" (UniqueName: \"kubernetes.io/projected/d5cd5566-9b36-421b-8f02-f3d12db9b829-kube-api-access-lh6s8\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801650 4814 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801660 4814 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d5cd5566-9b36-421b-8f02-f3d12db9b829-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801669 4814 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801680 4814 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d5cd5566-9b36-421b-8f02-f3d12db9b829-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801690 4814 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801701 4814 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801712 4814 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/d5cd5566-9b36-421b-8f02-f3d12db9b829-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801723 4814 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5cd5566-9b36-421b-8f02-f3d12db9b829-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801752 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-run-netns\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801783 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-node-log\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801797 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-env-overrides\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801795 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-log-socket\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801810 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-cni-netd\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801438 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-run-systemd\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801842 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-run-openvswitch\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801847 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-slash\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801860 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-host-run-ovn-kubernetes\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801875 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-etc-openvswitch\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.801883 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-ovnkube-config\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.802122 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-ovnkube-script-lib\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.803813 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-ovn-node-metrics-cert\") pod \"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.806706 4814 scope.go:117] "RemoveContainer" containerID="a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.819138 4814 scope.go:117] "RemoveContainer" containerID="db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.822219 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78lq5\" (UniqueName: \"kubernetes.io/projected/87cfa8dc-6930-4e9f-bab2-efa096d8ec9b-kube-api-access-78lq5\") pod 
\"ovnkube-node-js4sn\" (UID: \"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.831067 4814 scope.go:117] "RemoveContainer" containerID="9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.841954 4814 scope.go:117] "RemoveContainer" containerID="8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.842439 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f\": container with ID starting with 8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f not found: ID does not exist" containerID="8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.842481 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f"} err="failed to get container status \"8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f\": rpc error: code = NotFound desc = could not find container \"8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f\": container with ID starting with 8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.842509 4814 scope.go:117] "RemoveContainer" containerID="695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.842781 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab\": container with ID 
starting with 695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab not found: ID does not exist" containerID="695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.842876 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab"} err="failed to get container status \"695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab\": rpc error: code = NotFound desc = could not find container \"695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab\": container with ID starting with 695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.842945 4814 scope.go:117] "RemoveContainer" containerID="2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.843242 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\": container with ID starting with 2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374 not found: ID does not exist" containerID="2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.843275 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374"} err="failed to get container status \"2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\": rpc error: code = NotFound desc = could not find container \"2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\": container with ID starting with 2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374 not found: 
ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.843299 4814 scope.go:117] "RemoveContainer" containerID="593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.843517 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\": container with ID starting with 593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff not found: ID does not exist" containerID="593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.843540 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff"} err="failed to get container status \"593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\": rpc error: code = NotFound desc = could not find container \"593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\": container with ID starting with 593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.843553 4814 scope.go:117] "RemoveContainer" containerID="9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.843754 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\": container with ID starting with 9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe not found: ID does not exist" containerID="9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.843779 4814 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe"} err="failed to get container status \"9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\": rpc error: code = NotFound desc = could not find container \"9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\": container with ID starting with 9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.843796 4814 scope.go:117] "RemoveContainer" containerID="261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.843982 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\": container with ID starting with 261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f not found: ID does not exist" containerID="261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.844006 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f"} err="failed to get container status \"261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\": rpc error: code = NotFound desc = could not find container \"261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\": container with ID starting with 261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.844023 4814 scope.go:117] "RemoveContainer" containerID="b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.844223 4814 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\": container with ID starting with b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4 not found: ID does not exist" containerID="b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.844257 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4"} err="failed to get container status \"b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\": rpc error: code = NotFound desc = could not find container \"b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\": container with ID starting with b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4 not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.844271 4814 scope.go:117] "RemoveContainer" containerID="a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.844460 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\": container with ID starting with a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3 not found: ID does not exist" containerID="a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.844493 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3"} err="failed to get container status \"a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\": rpc error: code = NotFound desc = could 
not find container \"a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\": container with ID starting with a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3 not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.844510 4814 scope.go:117] "RemoveContainer" containerID="db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.844718 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\": container with ID starting with db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9 not found: ID does not exist" containerID="db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.844741 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9"} err="failed to get container status \"db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\": rpc error: code = NotFound desc = could not find container \"db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\": container with ID starting with db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9 not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.844754 4814 scope.go:117] "RemoveContainer" containerID="9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c" Dec 02 14:38:43 crc kubenswrapper[4814]: E1202 14:38:43.844925 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\": container with ID starting with 9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c not found: 
ID does not exist" containerID="9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.844945 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c"} err="failed to get container status \"9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\": rpc error: code = NotFound desc = could not find container \"9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\": container with ID starting with 9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.844958 4814 scope.go:117] "RemoveContainer" containerID="8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.846361 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f"} err="failed to get container status \"8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f\": rpc error: code = NotFound desc = could not find container \"8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f\": container with ID starting with 8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.846384 4814 scope.go:117] "RemoveContainer" containerID="695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.846654 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab"} err="failed to get container status \"695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab\": rpc error: code = 
NotFound desc = could not find container \"695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab\": container with ID starting with 695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.846675 4814 scope.go:117] "RemoveContainer" containerID="2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.846927 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374"} err="failed to get container status \"2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\": rpc error: code = NotFound desc = could not find container \"2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\": container with ID starting with 2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374 not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.846944 4814 scope.go:117] "RemoveContainer" containerID="593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.847096 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff"} err="failed to get container status \"593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\": rpc error: code = NotFound desc = could not find container \"593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\": container with ID starting with 593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.847116 4814 scope.go:117] "RemoveContainer" containerID="9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe" Dec 02 14:38:43 crc 
kubenswrapper[4814]: I1202 14:38:43.847295 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe"} err="failed to get container status \"9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\": rpc error: code = NotFound desc = could not find container \"9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\": container with ID starting with 9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.847309 4814 scope.go:117] "RemoveContainer" containerID="261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.847511 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f"} err="failed to get container status \"261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\": rpc error: code = NotFound desc = could not find container \"261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\": container with ID starting with 261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.847523 4814 scope.go:117] "RemoveContainer" containerID="b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.847658 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4"} err="failed to get container status \"b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\": rpc error: code = NotFound desc = could not find container \"b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\": container 
with ID starting with b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4 not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.847670 4814 scope.go:117] "RemoveContainer" containerID="a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.847797 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3"} err="failed to get container status \"a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\": rpc error: code = NotFound desc = could not find container \"a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\": container with ID starting with a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3 not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.847808 4814 scope.go:117] "RemoveContainer" containerID="db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.847956 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9"} err="failed to get container status \"db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\": rpc error: code = NotFound desc = could not find container \"db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\": container with ID starting with db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9 not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.847969 4814 scope.go:117] "RemoveContainer" containerID="9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.848098 4814 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c"} err="failed to get container status \"9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\": rpc error: code = NotFound desc = could not find container \"9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\": container with ID starting with 9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.848110 4814 scope.go:117] "RemoveContainer" containerID="8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.848249 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f"} err="failed to get container status \"8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f\": rpc error: code = NotFound desc = could not find container \"8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f\": container with ID starting with 8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.848261 4814 scope.go:117] "RemoveContainer" containerID="695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.848401 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab"} err="failed to get container status \"695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab\": rpc error: code = NotFound desc = could not find container \"695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab\": container with ID starting with 695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab not found: ID does not 
exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.848418 4814 scope.go:117] "RemoveContainer" containerID="2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.848565 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374"} err="failed to get container status \"2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\": rpc error: code = NotFound desc = could not find container \"2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\": container with ID starting with 2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374 not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.848583 4814 scope.go:117] "RemoveContainer" containerID="593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.848718 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff"} err="failed to get container status \"593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\": rpc error: code = NotFound desc = could not find container \"593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\": container with ID starting with 593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.848742 4814 scope.go:117] "RemoveContainer" containerID="9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.848978 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe"} err="failed to get container status 
\"9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\": rpc error: code = NotFound desc = could not find container \"9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\": container with ID starting with 9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.848995 4814 scope.go:117] "RemoveContainer" containerID="261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.849241 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f"} err="failed to get container status \"261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\": rpc error: code = NotFound desc = could not find container \"261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\": container with ID starting with 261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.849262 4814 scope.go:117] "RemoveContainer" containerID="b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.849427 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4"} err="failed to get container status \"b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\": rpc error: code = NotFound desc = could not find container \"b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\": container with ID starting with b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4 not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.849453 4814 scope.go:117] "RemoveContainer" 
containerID="a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.849614 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3"} err="failed to get container status \"a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\": rpc error: code = NotFound desc = could not find container \"a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\": container with ID starting with a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3 not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.849637 4814 scope.go:117] "RemoveContainer" containerID="db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.849773 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9"} err="failed to get container status \"db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\": rpc error: code = NotFound desc = could not find container \"db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\": container with ID starting with db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9 not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.849790 4814 scope.go:117] "RemoveContainer" containerID="9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.849931 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c"} err="failed to get container status \"9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\": rpc error: code = NotFound desc = could 
not find container \"9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\": container with ID starting with 9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.849947 4814 scope.go:117] "RemoveContainer" containerID="8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.850075 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f"} err="failed to get container status \"8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f\": rpc error: code = NotFound desc = could not find container \"8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f\": container with ID starting with 8053d8a3a689a50c54664e35a01fb83f0ade5686987a36f9ac14a4a2222a911f not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.850091 4814 scope.go:117] "RemoveContainer" containerID="695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.850252 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab"} err="failed to get container status \"695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab\": rpc error: code = NotFound desc = could not find container \"695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab\": container with ID starting with 695d0f322125708f1adc6196d4bb99897b78aea20109ed4dcefaeffb71b3feab not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.850268 4814 scope.go:117] "RemoveContainer" containerID="2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 
14:38:43.850420 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374"} err="failed to get container status \"2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\": rpc error: code = NotFound desc = could not find container \"2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374\": container with ID starting with 2230eedf187ecfa7fd29d35ef15745c22d24528e6c2dfc6cd413b7cd25ced374 not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.850438 4814 scope.go:117] "RemoveContainer" containerID="593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.850589 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff"} err="failed to get container status \"593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\": rpc error: code = NotFound desc = could not find container \"593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff\": container with ID starting with 593737d806ff227c4a59089108172a104080af2aa0851707c0e6548e18dff9ff not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.850607 4814 scope.go:117] "RemoveContainer" containerID="9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.850736 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe"} err="failed to get container status \"9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\": rpc error: code = NotFound desc = could not find container \"9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe\": container with ID starting with 
9af092e8a8931d787fcf70fe309aa7695898f1e3d8a72c72c95ed782cae781fe not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.850751 4814 scope.go:117] "RemoveContainer" containerID="261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.850881 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f"} err="failed to get container status \"261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\": rpc error: code = NotFound desc = could not find container \"261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f\": container with ID starting with 261fb65784f1c522857dbb70db778369be17d1304fcab700a854d1b0311c1f1f not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.850897 4814 scope.go:117] "RemoveContainer" containerID="b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.851024 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4"} err="failed to get container status \"b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\": rpc error: code = NotFound desc = could not find container \"b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4\": container with ID starting with b7ba9ca9e2a947d80350efaaf73afa4016ee029785c235f7ee242c5c27bb3bf4 not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.851040 4814 scope.go:117] "RemoveContainer" containerID="a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.851166 4814 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3"} err="failed to get container status \"a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\": rpc error: code = NotFound desc = could not find container \"a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3\": container with ID starting with a21dcce2bf597fcf3d2da328a149305eb4b119ec51efc3cf644e6bb1c7b9a1c3 not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.851182 4814 scope.go:117] "RemoveContainer" containerID="db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.851360 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9"} err="failed to get container status \"db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\": rpc error: code = NotFound desc = could not find container \"db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9\": container with ID starting with db9d90fe434621d9a938aad6074e49c94aba4fe5df2110224c39a29daa58d6a9 not found: ID does not exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.851382 4814 scope.go:117] "RemoveContainer" containerID="9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.851540 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c"} err="failed to get container status \"9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\": rpc error: code = NotFound desc = could not find container \"9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c\": container with ID starting with 9db6942854791f71dd3ca594f9230aa4f2a8a8f9266c7ee946b1bf8764094e1c not found: ID does not 
exist" Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.984516 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gnpv5"] Dec 02 14:38:43 crc kubenswrapper[4814]: I1202 14:38:43.990782 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:44 crc kubenswrapper[4814]: I1202 14:38:44.003972 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gnpv5"] Dec 02 14:38:44 crc kubenswrapper[4814]: I1202 14:38:44.153558 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5cd5566-9b36-421b-8f02-f3d12db9b829" path="/var/lib/kubelet/pods/d5cd5566-9b36-421b-8f02-f3d12db9b829/volumes" Dec 02 14:38:44 crc kubenswrapper[4814]: I1202 14:38:44.658008 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cd6tj_733ab60d-6038-4a5b-9120-9c4ef1f47774/kube-multus/2.log" Dec 02 14:38:44 crc kubenswrapper[4814]: I1202 14:38:44.659096 4814 generic.go:334] "Generic (PLEG): container finished" podID="87cfa8dc-6930-4e9f-bab2-efa096d8ec9b" containerID="2b8263f348a9fc4cabdb44bdf0adb73872a73e9022b7b2d478d45d014ab10cff" exitCode=0 Dec 02 14:38:44 crc kubenswrapper[4814]: I1202 14:38:44.659125 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" event={"ID":"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b","Type":"ContainerDied","Data":"2b8263f348a9fc4cabdb44bdf0adb73872a73e9022b7b2d478d45d014ab10cff"} Dec 02 14:38:44 crc kubenswrapper[4814]: I1202 14:38:44.659152 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" event={"ID":"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b","Type":"ContainerStarted","Data":"d821ce5f56380796c018b045b69fcb6dab15ad78d45418333fe484205e1d6f83"} Dec 02 14:38:45 crc kubenswrapper[4814]: I1202 14:38:45.666690 4814 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" event={"ID":"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b","Type":"ContainerStarted","Data":"5ab48a63305dadb9a084e439bcdcce64ec0d236a7634b20e7a2e416cd781b05e"} Dec 02 14:38:45 crc kubenswrapper[4814]: I1202 14:38:45.667145 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" event={"ID":"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b","Type":"ContainerStarted","Data":"a585e1fe56892c3e8936878a8f72e747b6d69fdf3a23660093a97c8e398b9845"} Dec 02 14:38:45 crc kubenswrapper[4814]: I1202 14:38:45.667156 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" event={"ID":"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b","Type":"ContainerStarted","Data":"962a427a943f8922a1e3352c4ded8edf82180abdfe09ff87208177f7715e5cd2"} Dec 02 14:38:45 crc kubenswrapper[4814]: I1202 14:38:45.667163 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" event={"ID":"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b","Type":"ContainerStarted","Data":"b800d120b1e1f95f8e743e921a092139a8b8dd80e379eee1ef0cba60a6a90733"} Dec 02 14:38:45 crc kubenswrapper[4814]: I1202 14:38:45.667173 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" event={"ID":"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b","Type":"ContainerStarted","Data":"efed2b042ea3496f7e16bb6ff1db02abec6e08301d31d5e3cca46fac13e673a9"} Dec 02 14:38:45 crc kubenswrapper[4814]: I1202 14:38:45.667181 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" event={"ID":"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b","Type":"ContainerStarted","Data":"92f872ccd2395e8d3287d3639cdc54ba3607812de005099f66751f9d77bb0f55"} Dec 02 14:38:48 crc kubenswrapper[4814]: I1202 14:38:48.688317 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" 
event={"ID":"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b","Type":"ContainerStarted","Data":"b3776ab7d0cb26622398e6b0f05ac5da5b2b607d113a110c32b789c0e00d7eff"} Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.056649 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff"] Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.057692 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.059041 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-c4qxz" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.060506 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.061174 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.089048 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6qj5\" (UniqueName: \"kubernetes.io/projected/5f531683-ec7c-4da1-a867-ddc54e0ac5ee-kube-api-access-n6qj5\") pod \"obo-prometheus-operator-668cf9dfbb-jvfff\" (UID: \"5f531683-ec7c-4da1-a867-ddc54e0ac5ee\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.190376 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6qj5\" (UniqueName: \"kubernetes.io/projected/5f531683-ec7c-4da1-a867-ddc54e0ac5ee-kube-api-access-n6qj5\") pod \"obo-prometheus-operator-668cf9dfbb-jvfff\" (UID: \"5f531683-ec7c-4da1-a867-ddc54e0ac5ee\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" Dec 02 14:38:50 
crc kubenswrapper[4814]: I1202 14:38:50.197076 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d"] Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.197807 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:38:50 crc kubenswrapper[4814]: W1202 14:38:50.199408 4814 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert": failed to list *v1.Secret: secrets "obo-prometheus-operator-admission-webhook-service-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators": no relationship found between node 'crc' and this object Dec 02 14:38:50 crc kubenswrapper[4814]: E1202 14:38:50.199456 4814 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"obo-prometheus-operator-admission-webhook-service-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 14:38:50 crc kubenswrapper[4814]: W1202 14:38:50.201551 4814 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-gxl2d": failed to list *v1.Secret: secrets "obo-prometheus-operator-admission-webhook-dockercfg-gxl2d" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators": no relationship found between node 'crc' and this object Dec 02 14:38:50 crc kubenswrapper[4814]: E1202 14:38:50.201587 4814 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-gxl2d\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"obo-prometheus-operator-admission-webhook-dockercfg-gxl2d\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.221067 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6qj5\" (UniqueName: \"kubernetes.io/projected/5f531683-ec7c-4da1-a867-ddc54e0ac5ee-kube-api-access-n6qj5\") pod \"obo-prometheus-operator-668cf9dfbb-jvfff\" (UID: \"5f531683-ec7c-4da1-a867-ddc54e0ac5ee\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.224570 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh"] Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.225267 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.291652 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d\" (UID: \"7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.291714 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d\" (UID: \"7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.291732 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b27a473-e399-48d4-b811-564ad42ff316-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh\" (UID: \"7b27a473-e399-48d4-b811-564ad42ff316\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.292013 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b27a473-e399-48d4-b811-564ad42ff316-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh\" (UID: \"7b27a473-e399-48d4-b811-564ad42ff316\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.374068 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.389739 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-pf955"] Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.390449 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.392045 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-p5rrc" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.392708 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b27a473-e399-48d4-b811-564ad42ff316-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh\" (UID: \"7b27a473-e399-48d4-b811-564ad42ff316\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.392740 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d\" (UID: \"7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.392772 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d\" (UID: \"7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.392788 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b27a473-e399-48d4-b811-564ad42ff316-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh\" (UID: \"7b27a473-e399-48d4-b811-564ad42ff316\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.393003 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 02 14:38:50 crc kubenswrapper[4814]: E1202 14:38:50.409839 4814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-jvfff_openshift-operators_5f531683-ec7c-4da1-a867-ddc54e0ac5ee_0(0ba4c99c7e6fb3459930269bf5359039b0540920ee81f265126ac407973d130e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 14:38:50 crc kubenswrapper[4814]: E1202 14:38:50.409964 4814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-jvfff_openshift-operators_5f531683-ec7c-4da1-a867-ddc54e0ac5ee_0(0ba4c99c7e6fb3459930269bf5359039b0540920ee81f265126ac407973d130e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" Dec 02 14:38:50 crc kubenswrapper[4814]: E1202 14:38:50.410054 4814 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-jvfff_openshift-operators_5f531683-ec7c-4da1-a867-ddc54e0ac5ee_0(0ba4c99c7e6fb3459930269bf5359039b0540920ee81f265126ac407973d130e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" Dec 02 14:38:50 crc kubenswrapper[4814]: E1202 14:38:50.410157 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-jvfff_openshift-operators(5f531683-ec7c-4da1-a867-ddc54e0ac5ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-jvfff_openshift-operators(5f531683-ec7c-4da1-a867-ddc54e0ac5ee)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-jvfff_openshift-operators_5f531683-ec7c-4da1-a867-ddc54e0ac5ee_0(0ba4c99c7e6fb3459930269bf5359039b0540920ee81f265126ac407973d130e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" podUID="5f531683-ec7c-4da1-a867-ddc54e0ac5ee" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.570102 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-zjnzp"] Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.570736 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.575260 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-sh749" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.606949 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4hhk\" (UniqueName: \"kubernetes.io/projected/564529e2-c4c3-4a3a-8b02-7dc44894e427-kube-api-access-f4hhk\") pod \"perses-operator-5446b9c989-zjnzp\" (UID: \"564529e2-c4c3-4a3a-8b02-7dc44894e427\") " pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.607015 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/564529e2-c4c3-4a3a-8b02-7dc44894e427-openshift-service-ca\") pod \"perses-operator-5446b9c989-zjnzp\" (UID: \"564529e2-c4c3-4a3a-8b02-7dc44894e427\") " pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.607086 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4635ec6-50f3-4e74-9cc9-e3abbe7461fe-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-pf955\" (UID: \"b4635ec6-50f3-4e74-9cc9-e3abbe7461fe\") " pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.607132 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9dzt\" (UniqueName: \"kubernetes.io/projected/b4635ec6-50f3-4e74-9cc9-e3abbe7461fe-kube-api-access-z9dzt\") pod \"observability-operator-d8bb48f5d-pf955\" (UID: \"b4635ec6-50f3-4e74-9cc9-e3abbe7461fe\") " 
pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.650746 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.650811 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.650863 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.651478 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2845690d7fa975a6fc9bee687fba5b5a81fdaa565ae0b556f8b2ea0b2d8f3b40"} pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.651545 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" containerID="cri-o://2845690d7fa975a6fc9bee687fba5b5a81fdaa565ae0b556f8b2ea0b2d8f3b40" gracePeriod=600 Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.702825 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" event={"ID":"87cfa8dc-6930-4e9f-bab2-efa096d8ec9b","Type":"ContainerStarted","Data":"055c41d85845f39f950b17fd328b3e477597106c1ea55e78ae4fb1caae532c2a"} Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.703208 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.703272 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.708372 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9dzt\" (UniqueName: \"kubernetes.io/projected/b4635ec6-50f3-4e74-9cc9-e3abbe7461fe-kube-api-access-z9dzt\") pod \"observability-operator-d8bb48f5d-pf955\" (UID: \"b4635ec6-50f3-4e74-9cc9-e3abbe7461fe\") " pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.708577 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4hhk\" (UniqueName: \"kubernetes.io/projected/564529e2-c4c3-4a3a-8b02-7dc44894e427-kube-api-access-f4hhk\") pod \"perses-operator-5446b9c989-zjnzp\" (UID: \"564529e2-c4c3-4a3a-8b02-7dc44894e427\") " pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.708771 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/564529e2-c4c3-4a3a-8b02-7dc44894e427-openshift-service-ca\") pod \"perses-operator-5446b9c989-zjnzp\" (UID: \"564529e2-c4c3-4a3a-8b02-7dc44894e427\") " pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.709039 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4635ec6-50f3-4e74-9cc9-e3abbe7461fe-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-pf955\" (UID: \"b4635ec6-50f3-4e74-9cc9-e3abbe7461fe\") " pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.709714 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/564529e2-c4c3-4a3a-8b02-7dc44894e427-openshift-service-ca\") pod \"perses-operator-5446b9c989-zjnzp\" (UID: \"564529e2-c4c3-4a3a-8b02-7dc44894e427\") " pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.713986 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4635ec6-50f3-4e74-9cc9-e3abbe7461fe-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-pf955\" (UID: \"b4635ec6-50f3-4e74-9cc9-e3abbe7461fe\") " pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.734667 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4hhk\" (UniqueName: \"kubernetes.io/projected/564529e2-c4c3-4a3a-8b02-7dc44894e427-kube-api-access-f4hhk\") pod \"perses-operator-5446b9c989-zjnzp\" (UID: \"564529e2-c4c3-4a3a-8b02-7dc44894e427\") " pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.741067 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9dzt\" (UniqueName: \"kubernetes.io/projected/b4635ec6-50f3-4e74-9cc9-e3abbe7461fe-kube-api-access-z9dzt\") pod \"observability-operator-d8bb48f5d-pf955\" (UID: \"b4635ec6-50f3-4e74-9cc9-e3abbe7461fe\") " pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:38:50 crc 
kubenswrapper[4814]: I1202 14:38:50.750931 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.788148 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" podStartSLOduration=7.788133226 podStartE2EDuration="7.788133226s" podCreationTimestamp="2025-12-02 14:38:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:38:50.735082788 +0000 UTC m=+592.859444992" watchObservedRunningTime="2025-12-02 14:38:50.788133226 +0000 UTC m=+592.912495420" Dec 02 14:38:50 crc kubenswrapper[4814]: I1202 14:38:50.886029 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:38:50 crc kubenswrapper[4814]: E1202 14:38:50.908007 4814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-zjnzp_openshift-operators_564529e2-c4c3-4a3a-8b02-7dc44894e427_0(6b94d537bb972af35f2939390eb0fa8391b6fb287014153a7d8e580a2c2e2fea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 14:38:50 crc kubenswrapper[4814]: E1202 14:38:50.908073 4814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-zjnzp_openshift-operators_564529e2-c4c3-4a3a-8b02-7dc44894e427_0(6b94d537bb972af35f2939390eb0fa8391b6fb287014153a7d8e580a2c2e2fea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:38:50 crc kubenswrapper[4814]: E1202 14:38:50.908095 4814 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-zjnzp_openshift-operators_564529e2-c4c3-4a3a-8b02-7dc44894e427_0(6b94d537bb972af35f2939390eb0fa8391b6fb287014153a7d8e580a2c2e2fea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:38:50 crc kubenswrapper[4814]: E1202 14:38:50.908144 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-zjnzp_openshift-operators(564529e2-c4c3-4a3a-8b02-7dc44894e427)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-zjnzp_openshift-operators(564529e2-c4c3-4a3a-8b02-7dc44894e427)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-zjnzp_openshift-operators_564529e2-c4c3-4a3a-8b02-7dc44894e427_0(6b94d537bb972af35f2939390eb0fa8391b6fb287014153a7d8e580a2c2e2fea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-zjnzp" podUID="564529e2-c4c3-4a3a-8b02-7dc44894e427" Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.014681 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.053027 4814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-pf955_openshift-operators_b4635ec6-50f3-4e74-9cc9-e3abbe7461fe_0(094c1ce8370a7863eb81879c1b72a567d8136af74cc294f6a266ab1f6ba63f07): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.053340 4814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-pf955_openshift-operators_b4635ec6-50f3-4e74-9cc9-e3abbe7461fe_0(094c1ce8370a7863eb81879c1b72a567d8136af74cc294f6a266ab1f6ba63f07): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.053362 4814 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-pf955_openshift-operators_b4635ec6-50f3-4e74-9cc9-e3abbe7461fe_0(094c1ce8370a7863eb81879c1b72a567d8136af74cc294f6a266ab1f6ba63f07): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.053406 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-pf955_openshift-operators(b4635ec6-50f3-4e74-9cc9-e3abbe7461fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-pf955_openshift-operators(b4635ec6-50f3-4e74-9cc9-e3abbe7461fe)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-pf955_openshift-operators_b4635ec6-50f3-4e74-9cc9-e3abbe7461fe_0(094c1ce8370a7863eb81879c1b72a567d8136af74cc294f6a266ab1f6ba63f07): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-pf955" podUID="b4635ec6-50f3-4e74-9cc9-e3abbe7461fe" Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.282354 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-gxl2d" Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.328101 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d"] Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.333338 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh"] Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.340338 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-zjnzp"] Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.348469 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-pf955"] Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.356869 4814 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff"] Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.357022 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.357396 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.378276 4814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-jvfff_openshift-operators_5f531683-ec7c-4da1-a867-ddc54e0ac5ee_0(265e30c1183e7ae1ebaa981dcc9d2c21a1b55a5abbdc2ec393c40068408576b1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.378330 4814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-jvfff_openshift-operators_5f531683-ec7c-4da1-a867-ddc54e0ac5ee_0(265e30c1183e7ae1ebaa981dcc9d2c21a1b55a5abbdc2ec393c40068408576b1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.378353 4814 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-jvfff_openshift-operators_5f531683-ec7c-4da1-a867-ddc54e0ac5ee_0(265e30c1183e7ae1ebaa981dcc9d2c21a1b55a5abbdc2ec393c40068408576b1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.378391 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-jvfff_openshift-operators(5f531683-ec7c-4da1-a867-ddc54e0ac5ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-jvfff_openshift-operators(5f531683-ec7c-4da1-a867-ddc54e0ac5ee)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-jvfff_openshift-operators_5f531683-ec7c-4da1-a867-ddc54e0ac5ee_0(265e30c1183e7ae1ebaa981dcc9d2c21a1b55a5abbdc2ec393c40068408576b1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" podUID="5f531683-ec7c-4da1-a867-ddc54e0ac5ee" Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.393053 4814 secret.go:188] Couldn't get secret openshift-operators/obo-prometheus-operator-admission-webhook-service-cert: failed to sync secret cache: timed out waiting for the condition Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.393184 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b27a473-e399-48d4-b811-564ad42ff316-webhook-cert podName:7b27a473-e399-48d4-b811-564ad42ff316 nodeName:}" failed. No retries permitted until 2025-12-02 14:38:51.893154655 +0000 UTC m=+594.017516889 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7b27a473-e399-48d4-b811-564ad42ff316-webhook-cert") pod "obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" (UID: "7b27a473-e399-48d4-b811-564ad42ff316") : failed to sync secret cache: timed out waiting for the condition Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.393585 4814 secret.go:188] Couldn't get secret openshift-operators/obo-prometheus-operator-admission-webhook-service-cert: failed to sync secret cache: timed out waiting for the condition Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.393644 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b27a473-e399-48d4-b811-564ad42ff316-apiservice-cert podName:7b27a473-e399-48d4-b811-564ad42ff316 nodeName:}" failed. No retries permitted until 2025-12-02 14:38:51.893628717 +0000 UTC m=+594.017990961 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/7b27a473-e399-48d4-b811-564ad42ff316-apiservice-cert") pod "obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" (UID: "7b27a473-e399-48d4-b811-564ad42ff316") : failed to sync secret cache: timed out waiting for the condition Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.393671 4814 secret.go:188] Couldn't get secret openshift-operators/obo-prometheus-operator-admission-webhook-service-cert: failed to sync secret cache: timed out waiting for the condition Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.393707 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855-apiservice-cert podName:7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855 nodeName:}" failed. No retries permitted until 2025-12-02 14:38:51.893695109 +0000 UTC m=+594.018057343 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855-apiservice-cert") pod "obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" (UID: "7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855") : failed to sync secret cache: timed out waiting for the condition Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.393752 4814 secret.go:188] Couldn't get secret openshift-operators/obo-prometheus-operator-admission-webhook-service-cert: failed to sync secret cache: timed out waiting for the condition Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.393800 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855-webhook-cert podName:7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855 nodeName:}" failed. No retries permitted until 2025-12-02 14:38:51.893787111 +0000 UTC m=+594.018149355 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855-webhook-cert") pod "obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" (UID: "7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855") : failed to sync secret cache: timed out waiting for the condition Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.545806 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.711982 4814 generic.go:334] "Generic (PLEG): container finished" podID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerID="2845690d7fa975a6fc9bee687fba5b5a81fdaa565ae0b556f8b2ea0b2d8f3b40" exitCode=0 Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.712021 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" 
event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerDied","Data":"2845690d7fa975a6fc9bee687fba5b5a81fdaa565ae0b556f8b2ea0b2d8f3b40"} Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.712370 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.712385 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.712400 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerStarted","Data":"a1ed9f1bb911b9ac82f3b2b6f82d455cb053025789269a61268761d5183751b3"} Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.712433 4814 scope.go:117] "RemoveContainer" containerID="4d6d34edeed3132fe16646589dc9a6700a81ed66231cd064d17aa7b5c701ae63" Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.712864 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.713116 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.713213 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.754264 4814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-pf955_openshift-operators_b4635ec6-50f3-4e74-9cc9-e3abbe7461fe_0(5fb315fc4429a06a0a64860befb36608dbb1d9a5cf23c61626402b5da6e4b7fc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.754326 4814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-pf955_openshift-operators_b4635ec6-50f3-4e74-9cc9-e3abbe7461fe_0(5fb315fc4429a06a0a64860befb36608dbb1d9a5cf23c61626402b5da6e4b7fc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.754347 4814 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-pf955_openshift-operators_b4635ec6-50f3-4e74-9cc9-e3abbe7461fe_0(5fb315fc4429a06a0a64860befb36608dbb1d9a5cf23c61626402b5da6e4b7fc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.754386 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-pf955_openshift-operators(b4635ec6-50f3-4e74-9cc9-e3abbe7461fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-pf955_openshift-operators(b4635ec6-50f3-4e74-9cc9-e3abbe7461fe)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-pf955_openshift-operators_b4635ec6-50f3-4e74-9cc9-e3abbe7461fe_0(5fb315fc4429a06a0a64860befb36608dbb1d9a5cf23c61626402b5da6e4b7fc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-pf955" podUID="b4635ec6-50f3-4e74-9cc9-e3abbe7461fe" Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.754271 4814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-zjnzp_openshift-operators_564529e2-c4c3-4a3a-8b02-7dc44894e427_0(cbe948f7dbd2d60b832dd5468e510628f38cb5bd5da649cbd33a1d2047d1c98c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.754500 4814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-zjnzp_openshift-operators_564529e2-c4c3-4a3a-8b02-7dc44894e427_0(cbe948f7dbd2d60b832dd5468e510628f38cb5bd5da649cbd33a1d2047d1c98c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.754524 4814 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-zjnzp_openshift-operators_564529e2-c4c3-4a3a-8b02-7dc44894e427_0(cbe948f7dbd2d60b832dd5468e510628f38cb5bd5da649cbd33a1d2047d1c98c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:38:51 crc kubenswrapper[4814]: E1202 14:38:51.754563 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-zjnzp_openshift-operators(564529e2-c4c3-4a3a-8b02-7dc44894e427)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-zjnzp_openshift-operators(564529e2-c4c3-4a3a-8b02-7dc44894e427)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-zjnzp_openshift-operators_564529e2-c4c3-4a3a-8b02-7dc44894e427_0(cbe948f7dbd2d60b832dd5468e510628f38cb5bd5da649cbd33a1d2047d1c98c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-zjnzp" podUID="564529e2-c4c3-4a3a-8b02-7dc44894e427" Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.766024 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.922053 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b27a473-e399-48d4-b811-564ad42ff316-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh\" (UID: \"7b27a473-e399-48d4-b811-564ad42ff316\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.922097 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d\" (UID: \"7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.922128 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d\" (UID: \"7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.922149 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b27a473-e399-48d4-b811-564ad42ff316-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh\" (UID: \"7b27a473-e399-48d4-b811-564ad42ff316\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.927023 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b27a473-e399-48d4-b811-564ad42ff316-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh\" (UID: \"7b27a473-e399-48d4-b811-564ad42ff316\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.927485 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d\" (UID: \"7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.927522 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b27a473-e399-48d4-b811-564ad42ff316-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh\" (UID: \"7b27a473-e399-48d4-b811-564ad42ff316\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:38:51 crc kubenswrapper[4814]: I1202 14:38:51.928609 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d\" (UID: \"7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 
14:38:52 crc kubenswrapper[4814]: I1202 14:38:52.010438 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:38:52 crc kubenswrapper[4814]: E1202 14:38:52.036491 4814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d_openshift-operators_7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855_0(4f2bfbdca4c759ef7a4ae3f62e880e52320af2a2e1fce4a2c0646914eb11085c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 14:38:52 crc kubenswrapper[4814]: E1202 14:38:52.036551 4814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d_openshift-operators_7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855_0(4f2bfbdca4c759ef7a4ae3f62e880e52320af2a2e1fce4a2c0646914eb11085c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:38:52 crc kubenswrapper[4814]: E1202 14:38:52.036575 4814 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d_openshift-operators_7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855_0(4f2bfbdca4c759ef7a4ae3f62e880e52320af2a2e1fce4a2c0646914eb11085c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:38:52 crc kubenswrapper[4814]: E1202 14:38:52.036623 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d_openshift-operators(7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d_openshift-operators(7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d_openshift-operators_7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855_0(4f2bfbdca4c759ef7a4ae3f62e880e52320af2a2e1fce4a2c0646914eb11085c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" podUID="7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855" Dec 02 14:38:52 crc kubenswrapper[4814]: I1202 14:38:52.054900 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:38:52 crc kubenswrapper[4814]: E1202 14:38:52.081005 4814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh_openshift-operators_7b27a473-e399-48d4-b811-564ad42ff316_0(b154b9acbe06638f5104e2583ea882b4737b8ec98751f6d46c4d7138f149edcc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Dec 02 14:38:52 crc kubenswrapper[4814]: E1202 14:38:52.081087 4814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh_openshift-operators_7b27a473-e399-48d4-b811-564ad42ff316_0(b154b9acbe06638f5104e2583ea882b4737b8ec98751f6d46c4d7138f149edcc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:38:52 crc kubenswrapper[4814]: E1202 14:38:52.081112 4814 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh_openshift-operators_7b27a473-e399-48d4-b811-564ad42ff316_0(b154b9acbe06638f5104e2583ea882b4737b8ec98751f6d46c4d7138f149edcc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:38:52 crc kubenswrapper[4814]: E1202 14:38:52.081164 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh_openshift-operators(7b27a473-e399-48d4-b811-564ad42ff316)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh_openshift-operators(7b27a473-e399-48d4-b811-564ad42ff316)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh_openshift-operators_7b27a473-e399-48d4-b811-564ad42ff316_0(b154b9acbe06638f5104e2583ea882b4737b8ec98751f6d46c4d7138f149edcc): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" podUID="7b27a473-e399-48d4-b811-564ad42ff316" Dec 02 14:38:52 crc kubenswrapper[4814]: I1202 14:38:52.719277 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:38:52 crc kubenswrapper[4814]: I1202 14:38:52.719388 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:38:52 crc kubenswrapper[4814]: I1202 14:38:52.719870 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:38:52 crc kubenswrapper[4814]: I1202 14:38:52.720021 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:38:52 crc kubenswrapper[4814]: E1202 14:38:52.752573 4814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d_openshift-operators_7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855_0(0d384da03534359991c417f709c228ab5e2515fff666528eb0c170249d97aead): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 14:38:52 crc kubenswrapper[4814]: E1202 14:38:52.752646 4814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d_openshift-operators_7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855_0(0d384da03534359991c417f709c228ab5e2515fff666528eb0c170249d97aead): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:38:52 crc kubenswrapper[4814]: E1202 14:38:52.752675 4814 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d_openshift-operators_7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855_0(0d384da03534359991c417f709c228ab5e2515fff666528eb0c170249d97aead): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:38:52 crc kubenswrapper[4814]: E1202 14:38:52.752730 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d_openshift-operators(7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d_openshift-operators(7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d_openshift-operators_7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855_0(0d384da03534359991c417f709c228ab5e2515fff666528eb0c170249d97aead): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" podUID="7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855" Dec 02 14:38:52 crc kubenswrapper[4814]: E1202 14:38:52.762737 4814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh_openshift-operators_7b27a473-e399-48d4-b811-564ad42ff316_0(df63096c8b4568a481a7c98725fd17e38bb977bb14be791c9c93358c2fbb29a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 14:38:52 crc kubenswrapper[4814]: E1202 14:38:52.762798 4814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh_openshift-operators_7b27a473-e399-48d4-b811-564ad42ff316_0(df63096c8b4568a481a7c98725fd17e38bb977bb14be791c9c93358c2fbb29a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:38:52 crc kubenswrapper[4814]: E1202 14:38:52.762822 4814 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh_openshift-operators_7b27a473-e399-48d4-b811-564ad42ff316_0(df63096c8b4568a481a7c98725fd17e38bb977bb14be791c9c93358c2fbb29a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:38:52 crc kubenswrapper[4814]: E1202 14:38:52.762865 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh_openshift-operators(7b27a473-e399-48d4-b811-564ad42ff316)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh_openshift-operators(7b27a473-e399-48d4-b811-564ad42ff316)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh_openshift-operators_7b27a473-e399-48d4-b811-564ad42ff316_0(df63096c8b4568a481a7c98725fd17e38bb977bb14be791c9c93358c2fbb29a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" podUID="7b27a473-e399-48d4-b811-564ad42ff316" Dec 02 14:38:56 crc kubenswrapper[4814]: I1202 14:38:56.144513 4814 scope.go:117] "RemoveContainer" containerID="848e89bc2046be5579f5aafbad4aee9b0f7d56fa9305f8e6ccd290ab3578b5ef" Dec 02 14:38:56 crc kubenswrapper[4814]: E1202 14:38:56.145363 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-cd6tj_openshift-multus(733ab60d-6038-4a5b-9120-9c4ef1f47774)\"" pod="openshift-multus/multus-cd6tj" podUID="733ab60d-6038-4a5b-9120-9c4ef1f47774" Dec 02 14:39:03 crc kubenswrapper[4814]: I1202 14:39:03.143935 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:39:03 crc kubenswrapper[4814]: I1202 14:39:03.144001 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:39:03 crc kubenswrapper[4814]: I1202 14:39:03.144580 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:39:03 crc kubenswrapper[4814]: I1202 14:39:03.144846 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:39:03 crc kubenswrapper[4814]: E1202 14:39:03.179256 4814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-pf955_openshift-operators_b4635ec6-50f3-4e74-9cc9-e3abbe7461fe_0(5cfc2c7de78ccd59517a2baa8572d619cacdaa1d2208af2a71a27b514a05aa06): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 14:39:03 crc kubenswrapper[4814]: E1202 14:39:03.179336 4814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-pf955_openshift-operators_b4635ec6-50f3-4e74-9cc9-e3abbe7461fe_0(5cfc2c7de78ccd59517a2baa8572d619cacdaa1d2208af2a71a27b514a05aa06): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:39:03 crc kubenswrapper[4814]: E1202 14:39:03.179364 4814 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-pf955_openshift-operators_b4635ec6-50f3-4e74-9cc9-e3abbe7461fe_0(5cfc2c7de78ccd59517a2baa8572d619cacdaa1d2208af2a71a27b514a05aa06): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:39:03 crc kubenswrapper[4814]: E1202 14:39:03.179412 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-pf955_openshift-operators(b4635ec6-50f3-4e74-9cc9-e3abbe7461fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-pf955_openshift-operators(b4635ec6-50f3-4e74-9cc9-e3abbe7461fe)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-pf955_openshift-operators_b4635ec6-50f3-4e74-9cc9-e3abbe7461fe_0(5cfc2c7de78ccd59517a2baa8572d619cacdaa1d2208af2a71a27b514a05aa06): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-pf955" podUID="b4635ec6-50f3-4e74-9cc9-e3abbe7461fe" Dec 02 14:39:03 crc kubenswrapper[4814]: E1202 14:39:03.185995 4814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-zjnzp_openshift-operators_564529e2-c4c3-4a3a-8b02-7dc44894e427_0(5d1ab77f0ea8a6fc6b9ede1947391ade28eed55c9ef8973d5f16c5d83d2821f8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 14:39:03 crc kubenswrapper[4814]: E1202 14:39:03.186099 4814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-zjnzp_openshift-operators_564529e2-c4c3-4a3a-8b02-7dc44894e427_0(5d1ab77f0ea8a6fc6b9ede1947391ade28eed55c9ef8973d5f16c5d83d2821f8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:39:03 crc kubenswrapper[4814]: E1202 14:39:03.186144 4814 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-zjnzp_openshift-operators_564529e2-c4c3-4a3a-8b02-7dc44894e427_0(5d1ab77f0ea8a6fc6b9ede1947391ade28eed55c9ef8973d5f16c5d83d2821f8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:39:03 crc kubenswrapper[4814]: E1202 14:39:03.186246 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-zjnzp_openshift-operators(564529e2-c4c3-4a3a-8b02-7dc44894e427)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-zjnzp_openshift-operators(564529e2-c4c3-4a3a-8b02-7dc44894e427)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-zjnzp_openshift-operators_564529e2-c4c3-4a3a-8b02-7dc44894e427_0(5d1ab77f0ea8a6fc6b9ede1947391ade28eed55c9ef8973d5f16c5d83d2821f8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-zjnzp" podUID="564529e2-c4c3-4a3a-8b02-7dc44894e427" Dec 02 14:39:05 crc kubenswrapper[4814]: I1202 14:39:05.143668 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:39:05 crc kubenswrapper[4814]: I1202 14:39:05.144318 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:39:05 crc kubenswrapper[4814]: E1202 14:39:05.168452 4814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh_openshift-operators_7b27a473-e399-48d4-b811-564ad42ff316_0(8c4b11c5898f794d96a7868a6c560453cea2d20133c67ed60ff0141b2e3dad94): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 14:39:05 crc kubenswrapper[4814]: E1202 14:39:05.168558 4814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh_openshift-operators_7b27a473-e399-48d4-b811-564ad42ff316_0(8c4b11c5898f794d96a7868a6c560453cea2d20133c67ed60ff0141b2e3dad94): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:39:05 crc kubenswrapper[4814]: E1202 14:39:05.168585 4814 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh_openshift-operators_7b27a473-e399-48d4-b811-564ad42ff316_0(8c4b11c5898f794d96a7868a6c560453cea2d20133c67ed60ff0141b2e3dad94): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:39:05 crc kubenswrapper[4814]: E1202 14:39:05.168641 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh_openshift-operators(7b27a473-e399-48d4-b811-564ad42ff316)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh_openshift-operators(7b27a473-e399-48d4-b811-564ad42ff316)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh_openshift-operators_7b27a473-e399-48d4-b811-564ad42ff316_0(8c4b11c5898f794d96a7868a6c560453cea2d20133c67ed60ff0141b2e3dad94): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" podUID="7b27a473-e399-48d4-b811-564ad42ff316" Dec 02 14:39:07 crc kubenswrapper[4814]: I1202 14:39:07.144418 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" Dec 02 14:39:07 crc kubenswrapper[4814]: I1202 14:39:07.144996 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" Dec 02 14:39:07 crc kubenswrapper[4814]: E1202 14:39:07.174743 4814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-jvfff_openshift-operators_5f531683-ec7c-4da1-a867-ddc54e0ac5ee_0(c17f3875e046d03e74fcd1198523580291cb3ab4027288679a96a4c836452386): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Dec 02 14:39:07 crc kubenswrapper[4814]: E1202 14:39:07.174880 4814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-jvfff_openshift-operators_5f531683-ec7c-4da1-a867-ddc54e0ac5ee_0(c17f3875e046d03e74fcd1198523580291cb3ab4027288679a96a4c836452386): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" Dec 02 14:39:07 crc kubenswrapper[4814]: E1202 14:39:07.175176 4814 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-jvfff_openshift-operators_5f531683-ec7c-4da1-a867-ddc54e0ac5ee_0(c17f3875e046d03e74fcd1198523580291cb3ab4027288679a96a4c836452386): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" Dec 02 14:39:07 crc kubenswrapper[4814]: E1202 14:39:07.175296 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-jvfff_openshift-operators(5f531683-ec7c-4da1-a867-ddc54e0ac5ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-jvfff_openshift-operators(5f531683-ec7c-4da1-a867-ddc54e0ac5ee)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-jvfff_openshift-operators_5f531683-ec7c-4da1-a867-ddc54e0ac5ee_0(c17f3875e046d03e74fcd1198523580291cb3ab4027288679a96a4c836452386): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" podUID="5f531683-ec7c-4da1-a867-ddc54e0ac5ee" Dec 02 14:39:08 crc kubenswrapper[4814]: I1202 14:39:08.144512 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:39:08 crc kubenswrapper[4814]: I1202 14:39:08.148668 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:39:08 crc kubenswrapper[4814]: E1202 14:39:08.171543 4814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d_openshift-operators_7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855_0(204f6ba3cceab2aa948fee6222c67d982f488e42b25179098217582dbe87cfd2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 14:39:08 crc kubenswrapper[4814]: E1202 14:39:08.171730 4814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d_openshift-operators_7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855_0(204f6ba3cceab2aa948fee6222c67d982f488e42b25179098217582dbe87cfd2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:39:08 crc kubenswrapper[4814]: E1202 14:39:08.171920 4814 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d_openshift-operators_7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855_0(204f6ba3cceab2aa948fee6222c67d982f488e42b25179098217582dbe87cfd2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:39:08 crc kubenswrapper[4814]: E1202 14:39:08.172055 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d_openshift-operators(7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d_openshift-operators(7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d_openshift-operators_7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855_0(204f6ba3cceab2aa948fee6222c67d982f488e42b25179098217582dbe87cfd2): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" podUID="7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855" Dec 02 14:39:11 crc kubenswrapper[4814]: I1202 14:39:11.144185 4814 scope.go:117] "RemoveContainer" containerID="848e89bc2046be5579f5aafbad4aee9b0f7d56fa9305f8e6ccd290ab3578b5ef" Dec 02 14:39:11 crc kubenswrapper[4814]: I1202 14:39:11.845375 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cd6tj_733ab60d-6038-4a5b-9120-9c4ef1f47774/kube-multus/2.log" Dec 02 14:39:11 crc kubenswrapper[4814]: I1202 14:39:11.845708 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cd6tj" event={"ID":"733ab60d-6038-4a5b-9120-9c4ef1f47774","Type":"ContainerStarted","Data":"17b64f5a97c4b8f623263c7cc44cfad26b34b8ca0b6a99570546715541511469"} Dec 02 14:39:14 crc kubenswrapper[4814]: I1202 14:39:14.013578 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-js4sn" Dec 02 14:39:15 crc kubenswrapper[4814]: I1202 14:39:15.143980 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:39:15 crc kubenswrapper[4814]: I1202 14:39:15.144591 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:39:15 crc kubenswrapper[4814]: I1202 14:39:15.351695 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-pf955"] Dec 02 14:39:15 crc kubenswrapper[4814]: I1202 14:39:15.873531 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-pf955" event={"ID":"b4635ec6-50f3-4e74-9cc9-e3abbe7461fe","Type":"ContainerStarted","Data":"170c11009fd016815d9e76197a57bf3b6a2df7de7e8884dc8fd9427e0ea86c27"} Dec 02 14:39:17 crc kubenswrapper[4814]: I1202 14:39:17.143846 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:39:17 crc kubenswrapper[4814]: I1202 14:39:17.144644 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" Dec 02 14:39:17 crc kubenswrapper[4814]: I1202 14:39:17.144944 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:39:17 crc kubenswrapper[4814]: I1202 14:39:17.145163 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:39:17 crc kubenswrapper[4814]: I1202 14:39:17.369502 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-zjnzp"] Dec 02 14:39:17 crc kubenswrapper[4814]: I1202 14:39:17.439350 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh"] Dec 02 14:39:17 crc kubenswrapper[4814]: W1202 14:39:17.447341 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b27a473_e399_48d4_b811_564ad42ff316.slice/crio-f5a798d34a58c873bd288ac9f8f57fc17c0b7ca9d3a90745269beda8f8cfdeb0 WatchSource:0}: Error finding container f5a798d34a58c873bd288ac9f8f57fc17c0b7ca9d3a90745269beda8f8cfdeb0: Status 404 returned error can't find the container with id f5a798d34a58c873bd288ac9f8f57fc17c0b7ca9d3a90745269beda8f8cfdeb0 Dec 02 14:39:17 crc kubenswrapper[4814]: I1202 14:39:17.891140 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-zjnzp" event={"ID":"564529e2-c4c3-4a3a-8b02-7dc44894e427","Type":"ContainerStarted","Data":"7d6dc38bdd822b9c0b52c18428d850a3caed8d6d4bb59e66a6b2c8d140f54b64"} Dec 02 14:39:17 crc kubenswrapper[4814]: I1202 14:39:17.892866 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" event={"ID":"7b27a473-e399-48d4-b811-564ad42ff316","Type":"ContainerStarted","Data":"f5a798d34a58c873bd288ac9f8f57fc17c0b7ca9d3a90745269beda8f8cfdeb0"} Dec 02 14:39:22 crc kubenswrapper[4814]: I1202 14:39:22.144080 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" Dec 02 14:39:22 crc kubenswrapper[4814]: I1202 14:39:22.145755 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" Dec 02 14:39:23 crc kubenswrapper[4814]: I1202 14:39:23.144125 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:39:23 crc kubenswrapper[4814]: I1202 14:39:23.144644 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" Dec 02 14:39:26 crc kubenswrapper[4814]: I1202 14:39:26.538127 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff"] Dec 02 14:39:26 crc kubenswrapper[4814]: I1202 14:39:26.574110 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d"] Dec 02 14:39:26 crc kubenswrapper[4814]: W1202 14:39:26.582047 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a7f89f6_1e09_4ba3_a4f4_c0c81e38a855.slice/crio-5486339aa9c6caf394df5d25d3f521c5c546acb0fbf4b2d107c73a81efd328e2 WatchSource:0}: Error finding container 5486339aa9c6caf394df5d25d3f521c5c546acb0fbf4b2d107c73a81efd328e2: Status 404 returned error can't find the container with id 5486339aa9c6caf394df5d25d3f521c5c546acb0fbf4b2d107c73a81efd328e2 Dec 02 14:39:26 crc kubenswrapper[4814]: I1202 14:39:26.942423 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-pf955" 
event={"ID":"b4635ec6-50f3-4e74-9cc9-e3abbe7461fe","Type":"ContainerStarted","Data":"ef15e52f8630001e97be44c26d739f19b830c041fdbbbd1230100e4d5489e1a7"} Dec 02 14:39:26 crc kubenswrapper[4814]: I1202 14:39:26.942852 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:39:26 crc kubenswrapper[4814]: I1202 14:39:26.944583 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" event={"ID":"7b27a473-e399-48d4-b811-564ad42ff316","Type":"ContainerStarted","Data":"3060cbd38c15af2aea96ab5564c05821cb880a282322741ea98cf5f7848c3c12"} Dec 02 14:39:26 crc kubenswrapper[4814]: I1202 14:39:26.946795 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" event={"ID":"7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855","Type":"ContainerStarted","Data":"573429630e93fcb77e8f565dcdadcc41c47d2fd3eb3cfd99e3f900602d92c182"} Dec 02 14:39:26 crc kubenswrapper[4814]: I1202 14:39:26.946832 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" event={"ID":"7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855","Type":"ContainerStarted","Data":"5486339aa9c6caf394df5d25d3f521c5c546acb0fbf4b2d107c73a81efd328e2"} Dec 02 14:39:26 crc kubenswrapper[4814]: I1202 14:39:26.949009 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-zjnzp" event={"ID":"564529e2-c4c3-4a3a-8b02-7dc44894e427","Type":"ContainerStarted","Data":"c8994145717a913e604fb6cde002557915bc3db67f1653297eefa8ab148627f1"} Dec 02 14:39:26 crc kubenswrapper[4814]: I1202 14:39:26.949575 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:39:26 crc kubenswrapper[4814]: I1202 14:39:26.950734 
4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" event={"ID":"5f531683-ec7c-4da1-a867-ddc54e0ac5ee","Type":"ContainerStarted","Data":"ab4f9296369d1cd57464305cb712a5d23b87f057a5da7a17251fd37a400bdd90"} Dec 02 14:39:26 crc kubenswrapper[4814]: I1202 14:39:26.972014 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-pf955" podStartSLOduration=25.979776184 podStartE2EDuration="36.971993798s" podCreationTimestamp="2025-12-02 14:38:50 +0000 UTC" firstStartedPulling="2025-12-02 14:39:15.390680086 +0000 UTC m=+617.515042290" lastFinishedPulling="2025-12-02 14:39:26.3828977 +0000 UTC m=+628.507259904" observedRunningTime="2025-12-02 14:39:26.966740035 +0000 UTC m=+629.091102239" watchObservedRunningTime="2025-12-02 14:39:26.971993798 +0000 UTC m=+629.096356022" Dec 02 14:39:27 crc kubenswrapper[4814]: I1202 14:39:27.005263 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-pf955" Dec 02 14:39:27 crc kubenswrapper[4814]: I1202 14:39:27.029956 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d" podStartSLOduration=37.029937699 podStartE2EDuration="37.029937699s" podCreationTimestamp="2025-12-02 14:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:39:27.005502613 +0000 UTC m=+629.129864827" watchObservedRunningTime="2025-12-02 14:39:27.029937699 +0000 UTC m=+629.154299923" Dec 02 14:39:27 crc kubenswrapper[4814]: I1202 14:39:27.053502 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh" podStartSLOduration=28.119046391 
podStartE2EDuration="37.053475423s" podCreationTimestamp="2025-12-02 14:38:50 +0000 UTC" firstStartedPulling="2025-12-02 14:39:17.44935174 +0000 UTC m=+619.573713944" lastFinishedPulling="2025-12-02 14:39:26.383780772 +0000 UTC m=+628.508142976" observedRunningTime="2025-12-02 14:39:27.050645012 +0000 UTC m=+629.175007216" watchObservedRunningTime="2025-12-02 14:39:27.053475423 +0000 UTC m=+629.177837627" Dec 02 14:39:27 crc kubenswrapper[4814]: I1202 14:39:27.055344 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-zjnzp" podStartSLOduration=28.038256803 podStartE2EDuration="37.05533188s" podCreationTimestamp="2025-12-02 14:38:50 +0000 UTC" firstStartedPulling="2025-12-02 14:39:17.394849695 +0000 UTC m=+619.519211899" lastFinishedPulling="2025-12-02 14:39:26.411924772 +0000 UTC m=+628.536286976" observedRunningTime="2025-12-02 14:39:27.032467033 +0000 UTC m=+629.156829257" watchObservedRunningTime="2025-12-02 14:39:27.05533188 +0000 UTC m=+629.179694084" Dec 02 14:39:30 crc kubenswrapper[4814]: I1202 14:39:30.980305 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" event={"ID":"5f531683-ec7c-4da1-a867-ddc54e0ac5ee","Type":"ContainerStarted","Data":"bf151239a85b980c5fe73ad3c32096e261149ecfaba7ba057827c16ad1294c5c"} Dec 02 14:39:31 crc kubenswrapper[4814]: I1202 14:39:31.008057 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jvfff" podStartSLOduration=37.265221724 podStartE2EDuration="41.008030855s" podCreationTimestamp="2025-12-02 14:38:50 +0000 UTC" firstStartedPulling="2025-12-02 14:39:26.558830457 +0000 UTC m=+628.683192671" lastFinishedPulling="2025-12-02 14:39:30.301639588 +0000 UTC m=+632.426001802" observedRunningTime="2025-12-02 14:39:31.003840929 +0000 UTC m=+633.128203133" watchObservedRunningTime="2025-12-02 14:39:31.008030855 
+0000 UTC m=+633.132393079" Dec 02 14:39:35 crc kubenswrapper[4814]: I1202 14:39:35.720295 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2"] Dec 02 14:39:35 crc kubenswrapper[4814]: I1202 14:39:35.721853 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2" Dec 02 14:39:35 crc kubenswrapper[4814]: I1202 14:39:35.723990 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 02 14:39:35 crc kubenswrapper[4814]: I1202 14:39:35.732481 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2"] Dec 02 14:39:35 crc kubenswrapper[4814]: I1202 14:39:35.746495 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/325de51a-d49a-46b9-b0f0-08fbc59525b9-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2\" (UID: \"325de51a-d49a-46b9-b0f0-08fbc59525b9\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2" Dec 02 14:39:35 crc kubenswrapper[4814]: I1202 14:39:35.746564 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcprl\" (UniqueName: \"kubernetes.io/projected/325de51a-d49a-46b9-b0f0-08fbc59525b9-kube-api-access-bcprl\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2\" (UID: \"325de51a-d49a-46b9-b0f0-08fbc59525b9\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2" Dec 02 14:39:35 crc kubenswrapper[4814]: I1202 14:39:35.746596 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/325de51a-d49a-46b9-b0f0-08fbc59525b9-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2\" (UID: \"325de51a-d49a-46b9-b0f0-08fbc59525b9\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2" Dec 02 14:39:35 crc kubenswrapper[4814]: I1202 14:39:35.847701 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/325de51a-d49a-46b9-b0f0-08fbc59525b9-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2\" (UID: \"325de51a-d49a-46b9-b0f0-08fbc59525b9\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2" Dec 02 14:39:35 crc kubenswrapper[4814]: I1202 14:39:35.847807 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcprl\" (UniqueName: \"kubernetes.io/projected/325de51a-d49a-46b9-b0f0-08fbc59525b9-kube-api-access-bcprl\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2\" (UID: \"325de51a-d49a-46b9-b0f0-08fbc59525b9\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2" Dec 02 14:39:35 crc kubenswrapper[4814]: I1202 14:39:35.847858 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/325de51a-d49a-46b9-b0f0-08fbc59525b9-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2\" (UID: \"325de51a-d49a-46b9-b0f0-08fbc59525b9\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2" Dec 02 14:39:35 crc kubenswrapper[4814]: I1202 14:39:35.848659 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/325de51a-d49a-46b9-b0f0-08fbc59525b9-bundle\") pod 
\"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2\" (UID: \"325de51a-d49a-46b9-b0f0-08fbc59525b9\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2" Dec 02 14:39:35 crc kubenswrapper[4814]: I1202 14:39:35.848734 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/325de51a-d49a-46b9-b0f0-08fbc59525b9-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2\" (UID: \"325de51a-d49a-46b9-b0f0-08fbc59525b9\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2" Dec 02 14:39:35 crc kubenswrapper[4814]: I1202 14:39:35.871714 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcprl\" (UniqueName: \"kubernetes.io/projected/325de51a-d49a-46b9-b0f0-08fbc59525b9-kube-api-access-bcprl\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2\" (UID: \"325de51a-d49a-46b9-b0f0-08fbc59525b9\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2" Dec 02 14:39:36 crc kubenswrapper[4814]: I1202 14:39:36.083582 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2" Dec 02 14:39:36 crc kubenswrapper[4814]: I1202 14:39:36.328659 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2"] Dec 02 14:39:36 crc kubenswrapper[4814]: W1202 14:39:36.334089 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod325de51a_d49a_46b9_b0f0_08fbc59525b9.slice/crio-1b881bfe4d15d337dd300e91883a852cf1fde3e07b8c288a17e0affe669fd060 WatchSource:0}: Error finding container 1b881bfe4d15d337dd300e91883a852cf1fde3e07b8c288a17e0affe669fd060: Status 404 returned error can't find the container with id 1b881bfe4d15d337dd300e91883a852cf1fde3e07b8c288a17e0affe669fd060 Dec 02 14:39:37 crc kubenswrapper[4814]: I1202 14:39:37.017450 4814 generic.go:334] "Generic (PLEG): container finished" podID="325de51a-d49a-46b9-b0f0-08fbc59525b9" containerID="ef6f4ccd8b15a10a3044b9c6ac79c130f759e36786a90e5d0bea2a6e96111451" exitCode=0 Dec 02 14:39:37 crc kubenswrapper[4814]: I1202 14:39:37.017509 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2" event={"ID":"325de51a-d49a-46b9-b0f0-08fbc59525b9","Type":"ContainerDied","Data":"ef6f4ccd8b15a10a3044b9c6ac79c130f759e36786a90e5d0bea2a6e96111451"} Dec 02 14:39:37 crc kubenswrapper[4814]: I1202 14:39:37.017827 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2" event={"ID":"325de51a-d49a-46b9-b0f0-08fbc59525b9","Type":"ContainerStarted","Data":"1b881bfe4d15d337dd300e91883a852cf1fde3e07b8c288a17e0affe669fd060"} Dec 02 14:39:39 crc kubenswrapper[4814]: I1202 14:39:39.033459 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2" event={"ID":"325de51a-d49a-46b9-b0f0-08fbc59525b9","Type":"ContainerStarted","Data":"68a8c63adc646f479c6290c9821af7fc9c054375baf10a6db158768038fdbb3d"} Dec 02 14:39:40 crc kubenswrapper[4814]: I1202 14:39:40.041332 4814 generic.go:334] "Generic (PLEG): container finished" podID="325de51a-d49a-46b9-b0f0-08fbc59525b9" containerID="68a8c63adc646f479c6290c9821af7fc9c054375baf10a6db158768038fdbb3d" exitCode=0 Dec 02 14:39:40 crc kubenswrapper[4814]: I1202 14:39:40.041388 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2" event={"ID":"325de51a-d49a-46b9-b0f0-08fbc59525b9","Type":"ContainerDied","Data":"68a8c63adc646f479c6290c9821af7fc9c054375baf10a6db158768038fdbb3d"} Dec 02 14:39:40 crc kubenswrapper[4814]: I1202 14:39:40.890846 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-zjnzp" Dec 02 14:39:41 crc kubenswrapper[4814]: I1202 14:39:41.054358 4814 generic.go:334] "Generic (PLEG): container finished" podID="325de51a-d49a-46b9-b0f0-08fbc59525b9" containerID="0a789fb78500cea335d77d6fdc647165487595526cd73bee29a5dae1e983668a" exitCode=0 Dec 02 14:39:41 crc kubenswrapper[4814]: I1202 14:39:41.054448 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2" event={"ID":"325de51a-d49a-46b9-b0f0-08fbc59525b9","Type":"ContainerDied","Data":"0a789fb78500cea335d77d6fdc647165487595526cd73bee29a5dae1e983668a"} Dec 02 14:39:42 crc kubenswrapper[4814]: I1202 14:39:42.340787 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2" Dec 02 14:39:42 crc kubenswrapper[4814]: I1202 14:39:42.445367 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/325de51a-d49a-46b9-b0f0-08fbc59525b9-util\") pod \"325de51a-d49a-46b9-b0f0-08fbc59525b9\" (UID: \"325de51a-d49a-46b9-b0f0-08fbc59525b9\") " Dec 02 14:39:42 crc kubenswrapper[4814]: I1202 14:39:42.445489 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcprl\" (UniqueName: \"kubernetes.io/projected/325de51a-d49a-46b9-b0f0-08fbc59525b9-kube-api-access-bcprl\") pod \"325de51a-d49a-46b9-b0f0-08fbc59525b9\" (UID: \"325de51a-d49a-46b9-b0f0-08fbc59525b9\") " Dec 02 14:39:42 crc kubenswrapper[4814]: I1202 14:39:42.445532 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/325de51a-d49a-46b9-b0f0-08fbc59525b9-bundle\") pod \"325de51a-d49a-46b9-b0f0-08fbc59525b9\" (UID: \"325de51a-d49a-46b9-b0f0-08fbc59525b9\") " Dec 02 14:39:42 crc kubenswrapper[4814]: I1202 14:39:42.446013 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/325de51a-d49a-46b9-b0f0-08fbc59525b9-bundle" (OuterVolumeSpecName: "bundle") pod "325de51a-d49a-46b9-b0f0-08fbc59525b9" (UID: "325de51a-d49a-46b9-b0f0-08fbc59525b9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:39:42 crc kubenswrapper[4814]: I1202 14:39:42.454533 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/325de51a-d49a-46b9-b0f0-08fbc59525b9-kube-api-access-bcprl" (OuterVolumeSpecName: "kube-api-access-bcprl") pod "325de51a-d49a-46b9-b0f0-08fbc59525b9" (UID: "325de51a-d49a-46b9-b0f0-08fbc59525b9"). InnerVolumeSpecName "kube-api-access-bcprl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:39:42 crc kubenswrapper[4814]: I1202 14:39:42.470376 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/325de51a-d49a-46b9-b0f0-08fbc59525b9-util" (OuterVolumeSpecName: "util") pod "325de51a-d49a-46b9-b0f0-08fbc59525b9" (UID: "325de51a-d49a-46b9-b0f0-08fbc59525b9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:39:42 crc kubenswrapper[4814]: I1202 14:39:42.547567 4814 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/325de51a-d49a-46b9-b0f0-08fbc59525b9-util\") on node \"crc\" DevicePath \"\"" Dec 02 14:39:42 crc kubenswrapper[4814]: I1202 14:39:42.547623 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcprl\" (UniqueName: \"kubernetes.io/projected/325de51a-d49a-46b9-b0f0-08fbc59525b9-kube-api-access-bcprl\") on node \"crc\" DevicePath \"\"" Dec 02 14:39:42 crc kubenswrapper[4814]: I1202 14:39:42.547639 4814 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/325de51a-d49a-46b9-b0f0-08fbc59525b9-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:39:43 crc kubenswrapper[4814]: I1202 14:39:43.069185 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2" event={"ID":"325de51a-d49a-46b9-b0f0-08fbc59525b9","Type":"ContainerDied","Data":"1b881bfe4d15d337dd300e91883a852cf1fde3e07b8c288a17e0affe669fd060"} Dec 02 14:39:43 crc kubenswrapper[4814]: I1202 14:39:43.069250 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b881bfe4d15d337dd300e91883a852cf1fde3e07b8c288a17e0affe669fd060" Dec 02 14:39:43 crc kubenswrapper[4814]: I1202 14:39:43.069326 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2" Dec 02 14:39:47 crc kubenswrapper[4814]: I1202 14:39:47.633598 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-dbljk"] Dec 02 14:39:47 crc kubenswrapper[4814]: E1202 14:39:47.634375 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="325de51a-d49a-46b9-b0f0-08fbc59525b9" containerName="util" Dec 02 14:39:47 crc kubenswrapper[4814]: I1202 14:39:47.634393 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="325de51a-d49a-46b9-b0f0-08fbc59525b9" containerName="util" Dec 02 14:39:47 crc kubenswrapper[4814]: E1202 14:39:47.634418 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="325de51a-d49a-46b9-b0f0-08fbc59525b9" containerName="pull" Dec 02 14:39:47 crc kubenswrapper[4814]: I1202 14:39:47.634425 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="325de51a-d49a-46b9-b0f0-08fbc59525b9" containerName="pull" Dec 02 14:39:47 crc kubenswrapper[4814]: E1202 14:39:47.634439 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="325de51a-d49a-46b9-b0f0-08fbc59525b9" containerName="extract" Dec 02 14:39:47 crc kubenswrapper[4814]: I1202 14:39:47.634446 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="325de51a-d49a-46b9-b0f0-08fbc59525b9" containerName="extract" Dec 02 14:39:47 crc kubenswrapper[4814]: I1202 14:39:47.634566 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="325de51a-d49a-46b9-b0f0-08fbc59525b9" containerName="extract" Dec 02 14:39:47 crc kubenswrapper[4814]: I1202 14:39:47.635113 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-dbljk" Dec 02 14:39:47 crc kubenswrapper[4814]: I1202 14:39:47.637670 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 02 14:39:47 crc kubenswrapper[4814]: I1202 14:39:47.637773 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-fp58t" Dec 02 14:39:47 crc kubenswrapper[4814]: I1202 14:39:47.638555 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 02 14:39:47 crc kubenswrapper[4814]: I1202 14:39:47.649240 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-dbljk"] Dec 02 14:39:47 crc kubenswrapper[4814]: I1202 14:39:47.812816 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szrkg\" (UniqueName: \"kubernetes.io/projected/eabb27e3-5307-4104-816f-fcfc906f2c73-kube-api-access-szrkg\") pod \"nmstate-operator-5b5b58f5c8-dbljk\" (UID: \"eabb27e3-5307-4104-816f-fcfc906f2c73\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-dbljk" Dec 02 14:39:47 crc kubenswrapper[4814]: I1202 14:39:47.913890 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szrkg\" (UniqueName: \"kubernetes.io/projected/eabb27e3-5307-4104-816f-fcfc906f2c73-kube-api-access-szrkg\") pod \"nmstate-operator-5b5b58f5c8-dbljk\" (UID: \"eabb27e3-5307-4104-816f-fcfc906f2c73\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-dbljk" Dec 02 14:39:47 crc kubenswrapper[4814]: I1202 14:39:47.935960 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szrkg\" (UniqueName: \"kubernetes.io/projected/eabb27e3-5307-4104-816f-fcfc906f2c73-kube-api-access-szrkg\") pod \"nmstate-operator-5b5b58f5c8-dbljk\" (UID: 
\"eabb27e3-5307-4104-816f-fcfc906f2c73\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-dbljk" Dec 02 14:39:47 crc kubenswrapper[4814]: I1202 14:39:47.956205 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-dbljk" Dec 02 14:39:48 crc kubenswrapper[4814]: I1202 14:39:48.444804 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-dbljk"] Dec 02 14:39:49 crc kubenswrapper[4814]: I1202 14:39:49.101341 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-dbljk" event={"ID":"eabb27e3-5307-4104-816f-fcfc906f2c73","Type":"ContainerStarted","Data":"3c7b2f88d556eb26297ad8c7c40e6a9f47e08be1a43b001538796e120b6e4c80"} Dec 02 14:39:53 crc kubenswrapper[4814]: I1202 14:39:53.128657 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-dbljk" event={"ID":"eabb27e3-5307-4104-816f-fcfc906f2c73","Type":"ContainerStarted","Data":"530d73d312da1d47c5aa508ff114b5da5ec77b8d61d20733efc0cffdce092b19"} Dec 02 14:39:53 crc kubenswrapper[4814]: I1202 14:39:53.158677 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-dbljk" podStartSLOduration=2.385084825 podStartE2EDuration="6.15864868s" podCreationTimestamp="2025-12-02 14:39:47 +0000 UTC" firstStartedPulling="2025-12-02 14:39:48.45977718 +0000 UTC m=+650.584139384" lastFinishedPulling="2025-12-02 14:39:52.233341035 +0000 UTC m=+654.357703239" observedRunningTime="2025-12-02 14:39:53.150429077 +0000 UTC m=+655.274791301" watchObservedRunningTime="2025-12-02 14:39:53.15864868 +0000 UTC m=+655.283010934" Dec 02 14:39:56 crc kubenswrapper[4814]: I1202 14:39:56.962272 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-q6vmr"] Dec 02 14:39:56 crc kubenswrapper[4814]: I1202 14:39:56.964254 
4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-q6vmr" Dec 02 14:39:56 crc kubenswrapper[4814]: I1202 14:39:56.966173 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-mkrqf" Dec 02 14:39:56 crc kubenswrapper[4814]: I1202 14:39:56.967885 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nxjcc"] Dec 02 14:39:56 crc kubenswrapper[4814]: I1202 14:39:56.968981 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nxjcc" Dec 02 14:39:56 crc kubenswrapper[4814]: I1202 14:39:56.972124 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 02 14:39:56 crc kubenswrapper[4814]: I1202 14:39:56.977761 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nxjcc"] Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.026403 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-q6vmr"] Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.047760 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5ppl\" (UniqueName: \"kubernetes.io/projected/448d2f06-663c-4836-8848-52113dbc856f-kube-api-access-f5ppl\") pod \"nmstate-webhook-5f6d4c5ccb-nxjcc\" (UID: \"448d2f06-663c-4836-8848-52113dbc856f\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nxjcc" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.047857 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8nm2\" (UniqueName: \"kubernetes.io/projected/1c57a5d5-c718-404d-a1a4-0ae448d422aa-kube-api-access-s8nm2\") pod \"nmstate-metrics-7f946cbc9-q6vmr\" (UID: 
\"1c57a5d5-c718-404d-a1a4-0ae448d422aa\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-q6vmr" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.047925 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/448d2f06-663c-4836-8848-52113dbc856f-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-nxjcc\" (UID: \"448d2f06-663c-4836-8848-52113dbc856f\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nxjcc" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.058306 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-8vx6h"] Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.058974 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-8vx6h" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.149322 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgnmw\" (UniqueName: \"kubernetes.io/projected/5965df3a-80a1-4c60-83ec-e1d93b61a489-kube-api-access-tgnmw\") pod \"nmstate-handler-8vx6h\" (UID: \"5965df3a-80a1-4c60-83ec-e1d93b61a489\") " pod="openshift-nmstate/nmstate-handler-8vx6h" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.149368 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5965df3a-80a1-4c60-83ec-e1d93b61a489-ovs-socket\") pod \"nmstate-handler-8vx6h\" (UID: \"5965df3a-80a1-4c60-83ec-e1d93b61a489\") " pod="openshift-nmstate/nmstate-handler-8vx6h" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.149416 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/448d2f06-663c-4836-8848-52113dbc856f-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-nxjcc\" (UID: 
\"448d2f06-663c-4836-8848-52113dbc856f\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nxjcc" Dec 02 14:39:57 crc kubenswrapper[4814]: E1202 14:39:57.149499 4814 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 02 14:39:57 crc kubenswrapper[4814]: E1202 14:39:57.149546 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/448d2f06-663c-4836-8848-52113dbc856f-tls-key-pair podName:448d2f06-663c-4836-8848-52113dbc856f nodeName:}" failed. No retries permitted until 2025-12-02 14:39:57.64952679 +0000 UTC m=+659.773888994 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/448d2f06-663c-4836-8848-52113dbc856f-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-nxjcc" (UID: "448d2f06-663c-4836-8848-52113dbc856f") : secret "openshift-nmstate-webhook" not found Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.149568 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5ppl\" (UniqueName: \"kubernetes.io/projected/448d2f06-663c-4836-8848-52113dbc856f-kube-api-access-f5ppl\") pod \"nmstate-webhook-5f6d4c5ccb-nxjcc\" (UID: \"448d2f06-663c-4836-8848-52113dbc856f\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nxjcc" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.149597 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5965df3a-80a1-4c60-83ec-e1d93b61a489-nmstate-lock\") pod \"nmstate-handler-8vx6h\" (UID: \"5965df3a-80a1-4c60-83ec-e1d93b61a489\") " pod="openshift-nmstate/nmstate-handler-8vx6h" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.149632 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/5965df3a-80a1-4c60-83ec-e1d93b61a489-dbus-socket\") pod \"nmstate-handler-8vx6h\" (UID: \"5965df3a-80a1-4c60-83ec-e1d93b61a489\") " pod="openshift-nmstate/nmstate-handler-8vx6h" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.149682 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8nm2\" (UniqueName: \"kubernetes.io/projected/1c57a5d5-c718-404d-a1a4-0ae448d422aa-kube-api-access-s8nm2\") pod \"nmstate-metrics-7f946cbc9-q6vmr\" (UID: \"1c57a5d5-c718-404d-a1a4-0ae448d422aa\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-q6vmr" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.155819 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-d9txb"] Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.156510 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-d9txb" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.163426 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.163815 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-r4mh5" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.164489 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.169019 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-d9txb"] Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.177024 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5ppl\" (UniqueName: \"kubernetes.io/projected/448d2f06-663c-4836-8848-52113dbc856f-kube-api-access-f5ppl\") pod 
\"nmstate-webhook-5f6d4c5ccb-nxjcc\" (UID: \"448d2f06-663c-4836-8848-52113dbc856f\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nxjcc" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.182004 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8nm2\" (UniqueName: \"kubernetes.io/projected/1c57a5d5-c718-404d-a1a4-0ae448d422aa-kube-api-access-s8nm2\") pod \"nmstate-metrics-7f946cbc9-q6vmr\" (UID: \"1c57a5d5-c718-404d-a1a4-0ae448d422aa\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-q6vmr" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.251030 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgnmw\" (UniqueName: \"kubernetes.io/projected/5965df3a-80a1-4c60-83ec-e1d93b61a489-kube-api-access-tgnmw\") pod \"nmstate-handler-8vx6h\" (UID: \"5965df3a-80a1-4c60-83ec-e1d93b61a489\") " pod="openshift-nmstate/nmstate-handler-8vx6h" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.251078 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l7lp\" (UniqueName: \"kubernetes.io/projected/73a87257-56e2-4465-ba71-524ec1614ef9-kube-api-access-2l7lp\") pod \"nmstate-console-plugin-7fbb5f6569-d9txb\" (UID: \"73a87257-56e2-4465-ba71-524ec1614ef9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-d9txb" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.251101 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/73a87257-56e2-4465-ba71-524ec1614ef9-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-d9txb\" (UID: \"73a87257-56e2-4465-ba71-524ec1614ef9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-d9txb" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.251126 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5965df3a-80a1-4c60-83ec-e1d93b61a489-ovs-socket\") pod \"nmstate-handler-8vx6h\" (UID: \"5965df3a-80a1-4c60-83ec-e1d93b61a489\") " pod="openshift-nmstate/nmstate-handler-8vx6h" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.251161 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/73a87257-56e2-4465-ba71-524ec1614ef9-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-d9txb\" (UID: \"73a87257-56e2-4465-ba71-524ec1614ef9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-d9txb" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.251227 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5965df3a-80a1-4c60-83ec-e1d93b61a489-ovs-socket\") pod \"nmstate-handler-8vx6h\" (UID: \"5965df3a-80a1-4c60-83ec-e1d93b61a489\") " pod="openshift-nmstate/nmstate-handler-8vx6h" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.251390 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5965df3a-80a1-4c60-83ec-e1d93b61a489-nmstate-lock\") pod \"nmstate-handler-8vx6h\" (UID: \"5965df3a-80a1-4c60-83ec-e1d93b61a489\") " pod="openshift-nmstate/nmstate-handler-8vx6h" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.251471 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5965df3a-80a1-4c60-83ec-e1d93b61a489-dbus-socket\") pod \"nmstate-handler-8vx6h\" (UID: \"5965df3a-80a1-4c60-83ec-e1d93b61a489\") " pod="openshift-nmstate/nmstate-handler-8vx6h" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.251523 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/5965df3a-80a1-4c60-83ec-e1d93b61a489-nmstate-lock\") pod \"nmstate-handler-8vx6h\" (UID: \"5965df3a-80a1-4c60-83ec-e1d93b61a489\") " pod="openshift-nmstate/nmstate-handler-8vx6h" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.251815 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5965df3a-80a1-4c60-83ec-e1d93b61a489-dbus-socket\") pod \"nmstate-handler-8vx6h\" (UID: \"5965df3a-80a1-4c60-83ec-e1d93b61a489\") " pod="openshift-nmstate/nmstate-handler-8vx6h" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.276678 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgnmw\" (UniqueName: \"kubernetes.io/projected/5965df3a-80a1-4c60-83ec-e1d93b61a489-kube-api-access-tgnmw\") pod \"nmstate-handler-8vx6h\" (UID: \"5965df3a-80a1-4c60-83ec-e1d93b61a489\") " pod="openshift-nmstate/nmstate-handler-8vx6h" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.339825 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-q6vmr" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.352754 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l7lp\" (UniqueName: \"kubernetes.io/projected/73a87257-56e2-4465-ba71-524ec1614ef9-kube-api-access-2l7lp\") pod \"nmstate-console-plugin-7fbb5f6569-d9txb\" (UID: \"73a87257-56e2-4465-ba71-524ec1614ef9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-d9txb" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.352815 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/73a87257-56e2-4465-ba71-524ec1614ef9-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-d9txb\" (UID: \"73a87257-56e2-4465-ba71-524ec1614ef9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-d9txb" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.352879 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/73a87257-56e2-4465-ba71-524ec1614ef9-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-d9txb\" (UID: \"73a87257-56e2-4465-ba71-524ec1614ef9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-d9txb" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.354395 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/73a87257-56e2-4465-ba71-524ec1614ef9-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-d9txb\" (UID: \"73a87257-56e2-4465-ba71-524ec1614ef9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-d9txb" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.357910 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/73a87257-56e2-4465-ba71-524ec1614ef9-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-d9txb\" (UID: \"73a87257-56e2-4465-ba71-524ec1614ef9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-d9txb" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.373171 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-8vx6h" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.380058 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7f988dc77-nvn6q"] Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.380922 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.391328 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l7lp\" (UniqueName: \"kubernetes.io/projected/73a87257-56e2-4465-ba71-524ec1614ef9-kube-api-access-2l7lp\") pod \"nmstate-console-plugin-7fbb5f6569-d9txb\" (UID: \"73a87257-56e2-4465-ba71-524ec1614ef9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-d9txb" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.395303 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f988dc77-nvn6q"] Dec 02 14:39:57 crc kubenswrapper[4814]: W1202 14:39:57.432326 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5965df3a_80a1_4c60_83ec_e1d93b61a489.slice/crio-6d76374108bd4ae06835ca8488aabbf617de080459f3febec2546d9200246fd3 WatchSource:0}: Error finding container 6d76374108bd4ae06835ca8488aabbf617de080459f3febec2546d9200246fd3: Status 404 returned error can't find the container with id 6d76374108bd4ae06835ca8488aabbf617de080459f3febec2546d9200246fd3 Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.454078 4814 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3a481c-e450-4d5b-a5a2-54eac6642566-console-serving-cert\") pod \"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.454143 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1d3a481c-e450-4d5b-a5a2-54eac6642566-console-oauth-config\") pod \"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.454180 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-trusted-ca-bundle\") pod \"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.454490 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-console-config\") pod \"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.454617 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-service-ca\") pod \"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc 
kubenswrapper[4814]: I1202 14:39:57.454853 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp5lz\" (UniqueName: \"kubernetes.io/projected/1d3a481c-e450-4d5b-a5a2-54eac6642566-kube-api-access-hp5lz\") pod \"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.454933 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-oauth-serving-cert\") pod \"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.470799 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-d9txb" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.556007 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3a481c-e450-4d5b-a5a2-54eac6642566-console-serving-cert\") pod \"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.556055 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1d3a481c-e450-4d5b-a5a2-54eac6642566-console-oauth-config\") pod \"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.556081 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-trusted-ca-bundle\") pod \"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.556105 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-console-config\") pod \"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.556134 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-service-ca\") pod \"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.556172 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp5lz\" (UniqueName: \"kubernetes.io/projected/1d3a481c-e450-4d5b-a5a2-54eac6642566-kube-api-access-hp5lz\") pod \"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.556195 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-oauth-serving-cert\") pod \"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.557597 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-oauth-serving-cert\") pod \"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.557613 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-console-config\") pod \"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.558056 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-trusted-ca-bundle\") pod \"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.559089 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-service-ca\") pod \"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.560280 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3a481c-e450-4d5b-a5a2-54eac6642566-console-serving-cert\") pod \"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.561039 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1d3a481c-e450-4d5b-a5a2-54eac6642566-console-oauth-config\") pod 
\"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.576045 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp5lz\" (UniqueName: \"kubernetes.io/projected/1d3a481c-e450-4d5b-a5a2-54eac6642566-kube-api-access-hp5lz\") pod \"console-7f988dc77-nvn6q\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.605888 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-q6vmr"] Dec 02 14:39:57 crc kubenswrapper[4814]: W1202 14:39:57.611647 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c57a5d5_c718_404d_a1a4_0ae448d422aa.slice/crio-89047e2bfb56bb5409812cdc4f2f26ac96abe9e2aa6cb2f6f20b7264ab5c428a WatchSource:0}: Error finding container 89047e2bfb56bb5409812cdc4f2f26ac96abe9e2aa6cb2f6f20b7264ab5c428a: Status 404 returned error can't find the container with id 89047e2bfb56bb5409812cdc4f2f26ac96abe9e2aa6cb2f6f20b7264ab5c428a Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.657456 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/448d2f06-663c-4836-8848-52113dbc856f-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-nxjcc\" (UID: \"448d2f06-663c-4836-8848-52113dbc856f\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nxjcc" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.660162 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/448d2f06-663c-4836-8848-52113dbc856f-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-nxjcc\" (UID: \"448d2f06-663c-4836-8848-52113dbc856f\") " 
pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nxjcc" Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.675700 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-d9txb"] Dec 02 14:39:57 crc kubenswrapper[4814]: W1202 14:39:57.679855 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73a87257_56e2_4465_ba71_524ec1614ef9.slice/crio-eaa6712f793c67d0876bc1aef481b19e20205e270f3ba95fae1905f0e1040af3 WatchSource:0}: Error finding container eaa6712f793c67d0876bc1aef481b19e20205e270f3ba95fae1905f0e1040af3: Status 404 returned error can't find the container with id eaa6712f793c67d0876bc1aef481b19e20205e270f3ba95fae1905f0e1040af3 Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.710384 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:39:57 crc kubenswrapper[4814]: W1202 14:39:57.917974 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3a481c_e450_4d5b_a5a2_54eac6642566.slice/crio-4899c59757c53087c153920ab661c614d78f49c45e65824482e9e07dc3fc8b90 WatchSource:0}: Error finding container 4899c59757c53087c153920ab661c614d78f49c45e65824482e9e07dc3fc8b90: Status 404 returned error can't find the container with id 4899c59757c53087c153920ab661c614d78f49c45e65824482e9e07dc3fc8b90 Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.919781 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f988dc77-nvn6q"] Dec 02 14:39:57 crc kubenswrapper[4814]: I1202 14:39:57.952166 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nxjcc" Dec 02 14:39:58 crc kubenswrapper[4814]: I1202 14:39:58.162845 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-d9txb" event={"ID":"73a87257-56e2-4465-ba71-524ec1614ef9","Type":"ContainerStarted","Data":"eaa6712f793c67d0876bc1aef481b19e20205e270f3ba95fae1905f0e1040af3"} Dec 02 14:39:58 crc kubenswrapper[4814]: I1202 14:39:58.168986 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f988dc77-nvn6q" event={"ID":"1d3a481c-e450-4d5b-a5a2-54eac6642566","Type":"ContainerStarted","Data":"4899c59757c53087c153920ab661c614d78f49c45e65824482e9e07dc3fc8b90"} Dec 02 14:39:58 crc kubenswrapper[4814]: I1202 14:39:58.174210 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8vx6h" event={"ID":"5965df3a-80a1-4c60-83ec-e1d93b61a489","Type":"ContainerStarted","Data":"6d76374108bd4ae06835ca8488aabbf617de080459f3febec2546d9200246fd3"} Dec 02 14:39:58 crc kubenswrapper[4814]: I1202 14:39:58.176377 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-q6vmr" event={"ID":"1c57a5d5-c718-404d-a1a4-0ae448d422aa","Type":"ContainerStarted","Data":"89047e2bfb56bb5409812cdc4f2f26ac96abe9e2aa6cb2f6f20b7264ab5c428a"} Dec 02 14:39:58 crc kubenswrapper[4814]: I1202 14:39:58.464776 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nxjcc"] Dec 02 14:39:58 crc kubenswrapper[4814]: W1202 14:39:58.474059 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod448d2f06_663c_4836_8848_52113dbc856f.slice/crio-b0680362ca42317f8474783a28b81273cb7c9a125c0709d5c647e8f2ee298bdf WatchSource:0}: Error finding container b0680362ca42317f8474783a28b81273cb7c9a125c0709d5c647e8f2ee298bdf: Status 404 returned error can't 
find the container with id b0680362ca42317f8474783a28b81273cb7c9a125c0709d5c647e8f2ee298bdf Dec 02 14:39:59 crc kubenswrapper[4814]: I1202 14:39:59.184019 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nxjcc" event={"ID":"448d2f06-663c-4836-8848-52113dbc856f","Type":"ContainerStarted","Data":"b0680362ca42317f8474783a28b81273cb7c9a125c0709d5c647e8f2ee298bdf"} Dec 02 14:39:59 crc kubenswrapper[4814]: I1202 14:39:59.185817 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f988dc77-nvn6q" event={"ID":"1d3a481c-e450-4d5b-a5a2-54eac6642566","Type":"ContainerStarted","Data":"151c14fe5549b053799dadc161fbd4771d2a94df0d80d0cad069556cf76636ff"} Dec 02 14:39:59 crc kubenswrapper[4814]: I1202 14:39:59.211132 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7f988dc77-nvn6q" podStartSLOduration=2.2111114880000002 podStartE2EDuration="2.211111488s" podCreationTimestamp="2025-12-02 14:39:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:39:59.206618267 +0000 UTC m=+661.330980471" watchObservedRunningTime="2025-12-02 14:39:59.211111488 +0000 UTC m=+661.335473692" Dec 02 14:40:01 crc kubenswrapper[4814]: I1202 14:40:01.198582 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8vx6h" event={"ID":"5965df3a-80a1-4c60-83ec-e1d93b61a489","Type":"ContainerStarted","Data":"880336e981d79b2c8097f5d2018c8cd26d87dc2b2c2b1f165202f31f965aa063"} Dec 02 14:40:01 crc kubenswrapper[4814]: I1202 14:40:01.198895 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-8vx6h" Dec 02 14:40:01 crc kubenswrapper[4814]: I1202 14:40:01.199874 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-q6vmr" 
event={"ID":"1c57a5d5-c718-404d-a1a4-0ae448d422aa","Type":"ContainerStarted","Data":"dff2e5744730a05e213cdad8acebe457a29e2080f313ad641da3a792294f1b14"} Dec 02 14:40:01 crc kubenswrapper[4814]: I1202 14:40:01.202078 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nxjcc" event={"ID":"448d2f06-663c-4836-8848-52113dbc856f","Type":"ContainerStarted","Data":"f7b96befa561c40b93febf8bd24fe80c96d4f88ce617ff74106affb934194734"} Dec 02 14:40:01 crc kubenswrapper[4814]: I1202 14:40:01.202351 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nxjcc" Dec 02 14:40:01 crc kubenswrapper[4814]: I1202 14:40:01.203358 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-d9txb" event={"ID":"73a87257-56e2-4465-ba71-524ec1614ef9","Type":"ContainerStarted","Data":"0279a3188753cd1a746266dcd0cd79d0b86a37e82ac8e0be81c526d5bf08eab5"} Dec 02 14:40:01 crc kubenswrapper[4814]: I1202 14:40:01.221079 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-8vx6h" podStartSLOduration=2.034568751 podStartE2EDuration="5.221063852s" podCreationTimestamp="2025-12-02 14:39:56 +0000 UTC" firstStartedPulling="2025-12-02 14:39:57.437376347 +0000 UTC m=+659.561738541" lastFinishedPulling="2025-12-02 14:40:00.623871438 +0000 UTC m=+662.748233642" observedRunningTime="2025-12-02 14:40:01.218781766 +0000 UTC m=+663.343143990" watchObservedRunningTime="2025-12-02 14:40:01.221063852 +0000 UTC m=+663.345426056" Dec 02 14:40:01 crc kubenswrapper[4814]: I1202 14:40:01.239602 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nxjcc" podStartSLOduration=3.075220484 podStartE2EDuration="5.239584199s" podCreationTimestamp="2025-12-02 14:39:56 +0000 UTC" firstStartedPulling="2025-12-02 14:39:58.476158654 +0000 UTC 
m=+660.600520858" lastFinishedPulling="2025-12-02 14:40:00.640522369 +0000 UTC m=+662.764884573" observedRunningTime="2025-12-02 14:40:01.236150894 +0000 UTC m=+663.360513108" watchObservedRunningTime="2025-12-02 14:40:01.239584199 +0000 UTC m=+663.363946403" Dec 02 14:40:01 crc kubenswrapper[4814]: I1202 14:40:01.257721 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-d9txb" podStartSLOduration=1.329156855 podStartE2EDuration="4.257705517s" podCreationTimestamp="2025-12-02 14:39:57 +0000 UTC" firstStartedPulling="2025-12-02 14:39:57.681944515 +0000 UTC m=+659.806306719" lastFinishedPulling="2025-12-02 14:40:00.610493167 +0000 UTC m=+662.734855381" observedRunningTime="2025-12-02 14:40:01.255244586 +0000 UTC m=+663.379606790" watchObservedRunningTime="2025-12-02 14:40:01.257705517 +0000 UTC m=+663.382067721" Dec 02 14:40:04 crc kubenswrapper[4814]: I1202 14:40:04.228119 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-q6vmr" event={"ID":"1c57a5d5-c718-404d-a1a4-0ae448d422aa","Type":"ContainerStarted","Data":"9574de0dbaeab7cb2a4c3d901abdd74712660c36af5c475eeda76b166cc654f8"} Dec 02 14:40:04 crc kubenswrapper[4814]: I1202 14:40:04.254538 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-q6vmr" podStartSLOduration=1.895868087 podStartE2EDuration="8.254509444s" podCreationTimestamp="2025-12-02 14:39:56 +0000 UTC" firstStartedPulling="2025-12-02 14:39:57.613570867 +0000 UTC m=+659.737933071" lastFinishedPulling="2025-12-02 14:40:03.972212194 +0000 UTC m=+666.096574428" observedRunningTime="2025-12-02 14:40:04.250501275 +0000 UTC m=+666.374863489" watchObservedRunningTime="2025-12-02 14:40:04.254509444 +0000 UTC m=+666.378871648" Dec 02 14:40:07 crc kubenswrapper[4814]: I1202 14:40:07.402915 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-nmstate/nmstate-handler-8vx6h" Dec 02 14:40:07 crc kubenswrapper[4814]: I1202 14:40:07.711655 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:40:07 crc kubenswrapper[4814]: I1202 14:40:07.712433 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:40:07 crc kubenswrapper[4814]: I1202 14:40:07.719528 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:40:08 crc kubenswrapper[4814]: I1202 14:40:08.262465 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:40:08 crc kubenswrapper[4814]: I1202 14:40:08.342547 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-794sp"] Dec 02 14:40:17 crc kubenswrapper[4814]: I1202 14:40:17.959984 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-nxjcc" Dec 02 14:40:32 crc kubenswrapper[4814]: I1202 14:40:32.058720 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4"] Dec 02 14:40:32 crc kubenswrapper[4814]: I1202 14:40:32.061100 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4" Dec 02 14:40:32 crc kubenswrapper[4814]: I1202 14:40:32.063106 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 02 14:40:32 crc kubenswrapper[4814]: I1202 14:40:32.073033 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4"] Dec 02 14:40:32 crc kubenswrapper[4814]: I1202 14:40:32.210343 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99e6fcf2-2bab-4258-820c-09748fd9a378-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4\" (UID: \"99e6fcf2-2bab-4258-820c-09748fd9a378\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4" Dec 02 14:40:32 crc kubenswrapper[4814]: I1202 14:40:32.210392 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhtb2\" (UniqueName: \"kubernetes.io/projected/99e6fcf2-2bab-4258-820c-09748fd9a378-kube-api-access-hhtb2\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4\" (UID: \"99e6fcf2-2bab-4258-820c-09748fd9a378\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4" Dec 02 14:40:32 crc kubenswrapper[4814]: I1202 14:40:32.210461 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99e6fcf2-2bab-4258-820c-09748fd9a378-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4\" (UID: \"99e6fcf2-2bab-4258-820c-09748fd9a378\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4" Dec 02 14:40:32 crc kubenswrapper[4814]: 
I1202 14:40:32.311734 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99e6fcf2-2bab-4258-820c-09748fd9a378-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4\" (UID: \"99e6fcf2-2bab-4258-820c-09748fd9a378\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4" Dec 02 14:40:32 crc kubenswrapper[4814]: I1202 14:40:32.311891 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99e6fcf2-2bab-4258-820c-09748fd9a378-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4\" (UID: \"99e6fcf2-2bab-4258-820c-09748fd9a378\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4" Dec 02 14:40:32 crc kubenswrapper[4814]: I1202 14:40:32.311949 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhtb2\" (UniqueName: \"kubernetes.io/projected/99e6fcf2-2bab-4258-820c-09748fd9a378-kube-api-access-hhtb2\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4\" (UID: \"99e6fcf2-2bab-4258-820c-09748fd9a378\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4" Dec 02 14:40:32 crc kubenswrapper[4814]: I1202 14:40:32.313248 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99e6fcf2-2bab-4258-820c-09748fd9a378-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4\" (UID: \"99e6fcf2-2bab-4258-820c-09748fd9a378\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4" Dec 02 14:40:32 crc kubenswrapper[4814]: I1202 14:40:32.313383 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/99e6fcf2-2bab-4258-820c-09748fd9a378-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4\" (UID: \"99e6fcf2-2bab-4258-820c-09748fd9a378\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4" Dec 02 14:40:32 crc kubenswrapper[4814]: I1202 14:40:32.334180 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhtb2\" (UniqueName: \"kubernetes.io/projected/99e6fcf2-2bab-4258-820c-09748fd9a378-kube-api-access-hhtb2\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4\" (UID: \"99e6fcf2-2bab-4258-820c-09748fd9a378\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4" Dec 02 14:40:32 crc kubenswrapper[4814]: I1202 14:40:32.419952 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4" Dec 02 14:40:32 crc kubenswrapper[4814]: I1202 14:40:32.650554 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4"] Dec 02 14:40:32 crc kubenswrapper[4814]: W1202 14:40:32.655652 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99e6fcf2_2bab_4258_820c_09748fd9a378.slice/crio-1724e15d0367ae89c30b85d7e077464d23c16284863c044dcac46b5ae21d5594 WatchSource:0}: Error finding container 1724e15d0367ae89c30b85d7e077464d23c16284863c044dcac46b5ae21d5594: Status 404 returned error can't find the container with id 1724e15d0367ae89c30b85d7e077464d23c16284863c044dcac46b5ae21d5594 Dec 02 14:40:33 crc kubenswrapper[4814]: I1202 14:40:33.404420 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-794sp" podUID="9fa2a78f-22e9-4a5a-9224-aa9db469d17a" 
containerName="console" containerID="cri-o://ada0d7617efd00cdedf1dd8054cb7203e6c32da8e2dedf2ac7a94b118b302261" gracePeriod=15 Dec 02 14:40:33 crc kubenswrapper[4814]: I1202 14:40:33.454601 4814 generic.go:334] "Generic (PLEG): container finished" podID="99e6fcf2-2bab-4258-820c-09748fd9a378" containerID="7605f4f626838596093a01dbc2b9cdc43b3c852c300b4b011994a6500d128b3b" exitCode=0 Dec 02 14:40:33 crc kubenswrapper[4814]: I1202 14:40:33.454670 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4" event={"ID":"99e6fcf2-2bab-4258-820c-09748fd9a378","Type":"ContainerDied","Data":"7605f4f626838596093a01dbc2b9cdc43b3c852c300b4b011994a6500d128b3b"} Dec 02 14:40:33 crc kubenswrapper[4814]: I1202 14:40:33.454748 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4" event={"ID":"99e6fcf2-2bab-4258-820c-09748fd9a378","Type":"ContainerStarted","Data":"1724e15d0367ae89c30b85d7e077464d23c16284863c044dcac46b5ae21d5594"} Dec 02 14:40:33 crc kubenswrapper[4814]: I1202 14:40:33.911387 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-794sp_9fa2a78f-22e9-4a5a-9224-aa9db469d17a/console/0.log" Dec 02 14:40:33 crc kubenswrapper[4814]: I1202 14:40:33.911598 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.037999 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-service-ca\") pod \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.038276 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-trusted-ca-bundle\") pod \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.038404 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-oauth-serving-cert\") pod \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.038488 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-console-config\") pod \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.038579 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6l6j\" (UniqueName: \"kubernetes.io/projected/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-kube-api-access-h6l6j\") pod \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.038668 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-console-serving-cert\") pod \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.038822 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-console-oauth-config\") pod \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\" (UID: \"9fa2a78f-22e9-4a5a-9224-aa9db469d17a\") " Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.039137 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-service-ca" (OuterVolumeSpecName: "service-ca") pod "9fa2a78f-22e9-4a5a-9224-aa9db469d17a" (UID: "9fa2a78f-22e9-4a5a-9224-aa9db469d17a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.039153 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9fa2a78f-22e9-4a5a-9224-aa9db469d17a" (UID: "9fa2a78f-22e9-4a5a-9224-aa9db469d17a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.039247 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-console-config" (OuterVolumeSpecName: "console-config") pod "9fa2a78f-22e9-4a5a-9224-aa9db469d17a" (UID: "9fa2a78f-22e9-4a5a-9224-aa9db469d17a"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.039270 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9fa2a78f-22e9-4a5a-9224-aa9db469d17a" (UID: "9fa2a78f-22e9-4a5a-9224-aa9db469d17a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.044978 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9fa2a78f-22e9-4a5a-9224-aa9db469d17a" (UID: "9fa2a78f-22e9-4a5a-9224-aa9db469d17a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.045497 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9fa2a78f-22e9-4a5a-9224-aa9db469d17a" (UID: "9fa2a78f-22e9-4a5a-9224-aa9db469d17a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.045609 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-kube-api-access-h6l6j" (OuterVolumeSpecName: "kube-api-access-h6l6j") pod "9fa2a78f-22e9-4a5a-9224-aa9db469d17a" (UID: "9fa2a78f-22e9-4a5a-9224-aa9db469d17a"). InnerVolumeSpecName "kube-api-access-h6l6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.140331 4814 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.140378 4814 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.140395 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.140414 4814 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.140429 4814 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-console-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.140445 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6l6j\" (UniqueName: \"kubernetes.io/projected/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-kube-api-access-h6l6j\") on node \"crc\" DevicePath \"\"" Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.140461 4814 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fa2a78f-22e9-4a5a-9224-aa9db469d17a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:40:34 crc 
kubenswrapper[4814]: I1202 14:40:34.467287 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-794sp_9fa2a78f-22e9-4a5a-9224-aa9db469d17a/console/0.log" Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.467347 4814 generic.go:334] "Generic (PLEG): container finished" podID="9fa2a78f-22e9-4a5a-9224-aa9db469d17a" containerID="ada0d7617efd00cdedf1dd8054cb7203e6c32da8e2dedf2ac7a94b118b302261" exitCode=2 Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.467385 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-794sp" event={"ID":"9fa2a78f-22e9-4a5a-9224-aa9db469d17a","Type":"ContainerDied","Data":"ada0d7617efd00cdedf1dd8054cb7203e6c32da8e2dedf2ac7a94b118b302261"} Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.467428 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-794sp" event={"ID":"9fa2a78f-22e9-4a5a-9224-aa9db469d17a","Type":"ContainerDied","Data":"4ac1804bb587c342c49c16406a4a91d7ff8318e49600b4e9330168dd48289d93"} Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.467475 4814 scope.go:117] "RemoveContainer" containerID="ada0d7617efd00cdedf1dd8054cb7203e6c32da8e2dedf2ac7a94b118b302261" Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.467491 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-794sp" Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.488968 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-794sp"] Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.495457 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-794sp"] Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.504388 4814 scope.go:117] "RemoveContainer" containerID="ada0d7617efd00cdedf1dd8054cb7203e6c32da8e2dedf2ac7a94b118b302261" Dec 02 14:40:34 crc kubenswrapper[4814]: E1202 14:40:34.505707 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ada0d7617efd00cdedf1dd8054cb7203e6c32da8e2dedf2ac7a94b118b302261\": container with ID starting with ada0d7617efd00cdedf1dd8054cb7203e6c32da8e2dedf2ac7a94b118b302261 not found: ID does not exist" containerID="ada0d7617efd00cdedf1dd8054cb7203e6c32da8e2dedf2ac7a94b118b302261" Dec 02 14:40:34 crc kubenswrapper[4814]: I1202 14:40:34.505826 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ada0d7617efd00cdedf1dd8054cb7203e6c32da8e2dedf2ac7a94b118b302261"} err="failed to get container status \"ada0d7617efd00cdedf1dd8054cb7203e6c32da8e2dedf2ac7a94b118b302261\": rpc error: code = NotFound desc = could not find container \"ada0d7617efd00cdedf1dd8054cb7203e6c32da8e2dedf2ac7a94b118b302261\": container with ID starting with ada0d7617efd00cdedf1dd8054cb7203e6c32da8e2dedf2ac7a94b118b302261 not found: ID does not exist" Dec 02 14:40:36 crc kubenswrapper[4814]: I1202 14:40:36.152342 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa2a78f-22e9-4a5a-9224-aa9db469d17a" path="/var/lib/kubelet/pods/9fa2a78f-22e9-4a5a-9224-aa9db469d17a/volumes" Dec 02 14:40:36 crc kubenswrapper[4814]: I1202 14:40:36.484432 4814 generic.go:334] "Generic (PLEG): 
container finished" podID="99e6fcf2-2bab-4258-820c-09748fd9a378" containerID="8598724ffb663737407c456ccb9ad43ea82973b64452af4faf9d3204797e6d3b" exitCode=0 Dec 02 14:40:36 crc kubenswrapper[4814]: I1202 14:40:36.484483 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4" event={"ID":"99e6fcf2-2bab-4258-820c-09748fd9a378","Type":"ContainerDied","Data":"8598724ffb663737407c456ccb9ad43ea82973b64452af4faf9d3204797e6d3b"} Dec 02 14:40:37 crc kubenswrapper[4814]: I1202 14:40:37.496556 4814 generic.go:334] "Generic (PLEG): container finished" podID="99e6fcf2-2bab-4258-820c-09748fd9a378" containerID="99a724dfa7725b29a85b68fe7b3f1807104c5a7ef0e13f66a7ecd266e856423b" exitCode=0 Dec 02 14:40:37 crc kubenswrapper[4814]: I1202 14:40:37.496618 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4" event={"ID":"99e6fcf2-2bab-4258-820c-09748fd9a378","Type":"ContainerDied","Data":"99a724dfa7725b29a85b68fe7b3f1807104c5a7ef0e13f66a7ecd266e856423b"} Dec 02 14:40:38 crc kubenswrapper[4814]: I1202 14:40:38.737840 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4" Dec 02 14:40:38 crc kubenswrapper[4814]: I1202 14:40:38.904802 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99e6fcf2-2bab-4258-820c-09748fd9a378-util\") pod \"99e6fcf2-2bab-4258-820c-09748fd9a378\" (UID: \"99e6fcf2-2bab-4258-820c-09748fd9a378\") " Dec 02 14:40:38 crc kubenswrapper[4814]: I1202 14:40:38.904879 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhtb2\" (UniqueName: \"kubernetes.io/projected/99e6fcf2-2bab-4258-820c-09748fd9a378-kube-api-access-hhtb2\") pod \"99e6fcf2-2bab-4258-820c-09748fd9a378\" (UID: \"99e6fcf2-2bab-4258-820c-09748fd9a378\") " Dec 02 14:40:38 crc kubenswrapper[4814]: I1202 14:40:38.904943 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99e6fcf2-2bab-4258-820c-09748fd9a378-bundle\") pod \"99e6fcf2-2bab-4258-820c-09748fd9a378\" (UID: \"99e6fcf2-2bab-4258-820c-09748fd9a378\") " Dec 02 14:40:38 crc kubenswrapper[4814]: I1202 14:40:38.906670 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99e6fcf2-2bab-4258-820c-09748fd9a378-bundle" (OuterVolumeSpecName: "bundle") pod "99e6fcf2-2bab-4258-820c-09748fd9a378" (UID: "99e6fcf2-2bab-4258-820c-09748fd9a378"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:40:38 crc kubenswrapper[4814]: I1202 14:40:38.912928 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99e6fcf2-2bab-4258-820c-09748fd9a378-kube-api-access-hhtb2" (OuterVolumeSpecName: "kube-api-access-hhtb2") pod "99e6fcf2-2bab-4258-820c-09748fd9a378" (UID: "99e6fcf2-2bab-4258-820c-09748fd9a378"). InnerVolumeSpecName "kube-api-access-hhtb2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:40:38 crc kubenswrapper[4814]: I1202 14:40:38.926037 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99e6fcf2-2bab-4258-820c-09748fd9a378-util" (OuterVolumeSpecName: "util") pod "99e6fcf2-2bab-4258-820c-09748fd9a378" (UID: "99e6fcf2-2bab-4258-820c-09748fd9a378"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:40:39 crc kubenswrapper[4814]: I1202 14:40:39.006880 4814 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99e6fcf2-2bab-4258-820c-09748fd9a378-util\") on node \"crc\" DevicePath \"\"" Dec 02 14:40:39 crc kubenswrapper[4814]: I1202 14:40:39.006918 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhtb2\" (UniqueName: \"kubernetes.io/projected/99e6fcf2-2bab-4258-820c-09748fd9a378-kube-api-access-hhtb2\") on node \"crc\" DevicePath \"\"" Dec 02 14:40:39 crc kubenswrapper[4814]: I1202 14:40:39.006932 4814 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99e6fcf2-2bab-4258-820c-09748fd9a378-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:40:39 crc kubenswrapper[4814]: I1202 14:40:39.510695 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4" event={"ID":"99e6fcf2-2bab-4258-820c-09748fd9a378","Type":"ContainerDied","Data":"1724e15d0367ae89c30b85d7e077464d23c16284863c044dcac46b5ae21d5594"} Dec 02 14:40:39 crc kubenswrapper[4814]: I1202 14:40:39.510756 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1724e15d0367ae89c30b85d7e077464d23c16284863c044dcac46b5ae21d5594" Dec 02 14:40:39 crc kubenswrapper[4814]: I1202 14:40:39.510796 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.739795 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b695cd4c4-skhtm"] Dec 02 14:40:47 crc kubenswrapper[4814]: E1202 14:40:47.740744 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99e6fcf2-2bab-4258-820c-09748fd9a378" containerName="pull" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.740768 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="99e6fcf2-2bab-4258-820c-09748fd9a378" containerName="pull" Dec 02 14:40:47 crc kubenswrapper[4814]: E1202 14:40:47.740791 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99e6fcf2-2bab-4258-820c-09748fd9a378" containerName="util" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.740800 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="99e6fcf2-2bab-4258-820c-09748fd9a378" containerName="util" Dec 02 14:40:47 crc kubenswrapper[4814]: E1202 14:40:47.740818 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa2a78f-22e9-4a5a-9224-aa9db469d17a" containerName="console" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.740826 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa2a78f-22e9-4a5a-9224-aa9db469d17a" containerName="console" Dec 02 14:40:47 crc kubenswrapper[4814]: E1202 14:40:47.740835 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99e6fcf2-2bab-4258-820c-09748fd9a378" containerName="extract" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.740841 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="99e6fcf2-2bab-4258-820c-09748fd9a378" containerName="extract" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.741085 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="99e6fcf2-2bab-4258-820c-09748fd9a378" 
containerName="extract" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.741103 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa2a78f-22e9-4a5a-9224-aa9db469d17a" containerName="console" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.741833 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b695cd4c4-skhtm" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.747887 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.747887 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-w5g9s" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.749241 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.749615 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.749682 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.758837 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b695cd4c4-skhtm"] Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.833690 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4a489ded-6343-4d09-a0a9-e61cc452e979-apiservice-cert\") pod \"metallb-operator-controller-manager-6b695cd4c4-skhtm\" (UID: \"4a489ded-6343-4d09-a0a9-e61cc452e979\") " pod="metallb-system/metallb-operator-controller-manager-6b695cd4c4-skhtm" Dec 
02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.833925 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4a489ded-6343-4d09-a0a9-e61cc452e979-webhook-cert\") pod \"metallb-operator-controller-manager-6b695cd4c4-skhtm\" (UID: \"4a489ded-6343-4d09-a0a9-e61cc452e979\") " pod="metallb-system/metallb-operator-controller-manager-6b695cd4c4-skhtm" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.834043 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b58zx\" (UniqueName: \"kubernetes.io/projected/4a489ded-6343-4d09-a0a9-e61cc452e979-kube-api-access-b58zx\") pod \"metallb-operator-controller-manager-6b695cd4c4-skhtm\" (UID: \"4a489ded-6343-4d09-a0a9-e61cc452e979\") " pod="metallb-system/metallb-operator-controller-manager-6b695cd4c4-skhtm" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.935153 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4a489ded-6343-4d09-a0a9-e61cc452e979-apiservice-cert\") pod \"metallb-operator-controller-manager-6b695cd4c4-skhtm\" (UID: \"4a489ded-6343-4d09-a0a9-e61cc452e979\") " pod="metallb-system/metallb-operator-controller-manager-6b695cd4c4-skhtm" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.935261 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4a489ded-6343-4d09-a0a9-e61cc452e979-webhook-cert\") pod \"metallb-operator-controller-manager-6b695cd4c4-skhtm\" (UID: \"4a489ded-6343-4d09-a0a9-e61cc452e979\") " pod="metallb-system/metallb-operator-controller-manager-6b695cd4c4-skhtm" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.935383 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b58zx\" (UniqueName: 
\"kubernetes.io/projected/4a489ded-6343-4d09-a0a9-e61cc452e979-kube-api-access-b58zx\") pod \"metallb-operator-controller-manager-6b695cd4c4-skhtm\" (UID: \"4a489ded-6343-4d09-a0a9-e61cc452e979\") " pod="metallb-system/metallb-operator-controller-manager-6b695cd4c4-skhtm" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.940284 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4a489ded-6343-4d09-a0a9-e61cc452e979-apiservice-cert\") pod \"metallb-operator-controller-manager-6b695cd4c4-skhtm\" (UID: \"4a489ded-6343-4d09-a0a9-e61cc452e979\") " pod="metallb-system/metallb-operator-controller-manager-6b695cd4c4-skhtm" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.946853 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4a489ded-6343-4d09-a0a9-e61cc452e979-webhook-cert\") pod \"metallb-operator-controller-manager-6b695cd4c4-skhtm\" (UID: \"4a489ded-6343-4d09-a0a9-e61cc452e979\") " pod="metallb-system/metallb-operator-controller-manager-6b695cd4c4-skhtm" Dec 02 14:40:47 crc kubenswrapper[4814]: I1202 14:40:47.956854 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b58zx\" (UniqueName: \"kubernetes.io/projected/4a489ded-6343-4d09-a0a9-e61cc452e979-kube-api-access-b58zx\") pod \"metallb-operator-controller-manager-6b695cd4c4-skhtm\" (UID: \"4a489ded-6343-4d09-a0a9-e61cc452e979\") " pod="metallb-system/metallb-operator-controller-manager-6b695cd4c4-skhtm" Dec 02 14:40:48 crc kubenswrapper[4814]: I1202 14:40:48.066740 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b695cd4c4-skhtm" Dec 02 14:40:48 crc kubenswrapper[4814]: I1202 14:40:48.070803 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-866879c6dc-gggxz"] Dec 02 14:40:48 crc kubenswrapper[4814]: I1202 14:40:48.071677 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-866879c6dc-gggxz" Dec 02 14:40:48 crc kubenswrapper[4814]: I1202 14:40:48.074221 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-sxv6g" Dec 02 14:40:48 crc kubenswrapper[4814]: I1202 14:40:48.074417 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 02 14:40:48 crc kubenswrapper[4814]: I1202 14:40:48.074415 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 02 14:40:48 crc kubenswrapper[4814]: I1202 14:40:48.088936 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-866879c6dc-gggxz"] Dec 02 14:40:48 crc kubenswrapper[4814]: I1202 14:40:48.238846 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl4mz\" (UniqueName: \"kubernetes.io/projected/9eabc75b-76b2-4d40-9055-b0b7906344a6-kube-api-access-hl4mz\") pod \"metallb-operator-webhook-server-866879c6dc-gggxz\" (UID: \"9eabc75b-76b2-4d40-9055-b0b7906344a6\") " pod="metallb-system/metallb-operator-webhook-server-866879c6dc-gggxz" Dec 02 14:40:48 crc kubenswrapper[4814]: I1202 14:40:48.239185 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9eabc75b-76b2-4d40-9055-b0b7906344a6-apiservice-cert\") pod 
\"metallb-operator-webhook-server-866879c6dc-gggxz\" (UID: \"9eabc75b-76b2-4d40-9055-b0b7906344a6\") " pod="metallb-system/metallb-operator-webhook-server-866879c6dc-gggxz" Dec 02 14:40:48 crc kubenswrapper[4814]: I1202 14:40:48.239213 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9eabc75b-76b2-4d40-9055-b0b7906344a6-webhook-cert\") pod \"metallb-operator-webhook-server-866879c6dc-gggxz\" (UID: \"9eabc75b-76b2-4d40-9055-b0b7906344a6\") " pod="metallb-system/metallb-operator-webhook-server-866879c6dc-gggxz" Dec 02 14:40:48 crc kubenswrapper[4814]: I1202 14:40:48.340117 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl4mz\" (UniqueName: \"kubernetes.io/projected/9eabc75b-76b2-4d40-9055-b0b7906344a6-kube-api-access-hl4mz\") pod \"metallb-operator-webhook-server-866879c6dc-gggxz\" (UID: \"9eabc75b-76b2-4d40-9055-b0b7906344a6\") " pod="metallb-system/metallb-operator-webhook-server-866879c6dc-gggxz" Dec 02 14:40:48 crc kubenswrapper[4814]: I1202 14:40:48.340195 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9eabc75b-76b2-4d40-9055-b0b7906344a6-apiservice-cert\") pod \"metallb-operator-webhook-server-866879c6dc-gggxz\" (UID: \"9eabc75b-76b2-4d40-9055-b0b7906344a6\") " pod="metallb-system/metallb-operator-webhook-server-866879c6dc-gggxz" Dec 02 14:40:48 crc kubenswrapper[4814]: I1202 14:40:48.340243 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9eabc75b-76b2-4d40-9055-b0b7906344a6-webhook-cert\") pod \"metallb-operator-webhook-server-866879c6dc-gggxz\" (UID: \"9eabc75b-76b2-4d40-9055-b0b7906344a6\") " pod="metallb-system/metallb-operator-webhook-server-866879c6dc-gggxz" Dec 02 14:40:48 crc kubenswrapper[4814]: I1202 14:40:48.347394 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9eabc75b-76b2-4d40-9055-b0b7906344a6-webhook-cert\") pod \"metallb-operator-webhook-server-866879c6dc-gggxz\" (UID: \"9eabc75b-76b2-4d40-9055-b0b7906344a6\") " pod="metallb-system/metallb-operator-webhook-server-866879c6dc-gggxz" Dec 02 14:40:48 crc kubenswrapper[4814]: I1202 14:40:48.347485 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9eabc75b-76b2-4d40-9055-b0b7906344a6-apiservice-cert\") pod \"metallb-operator-webhook-server-866879c6dc-gggxz\" (UID: \"9eabc75b-76b2-4d40-9055-b0b7906344a6\") " pod="metallb-system/metallb-operator-webhook-server-866879c6dc-gggxz" Dec 02 14:40:48 crc kubenswrapper[4814]: I1202 14:40:48.351779 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b695cd4c4-skhtm"] Dec 02 14:40:48 crc kubenswrapper[4814]: I1202 14:40:48.360157 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl4mz\" (UniqueName: \"kubernetes.io/projected/9eabc75b-76b2-4d40-9055-b0b7906344a6-kube-api-access-hl4mz\") pod \"metallb-operator-webhook-server-866879c6dc-gggxz\" (UID: \"9eabc75b-76b2-4d40-9055-b0b7906344a6\") " pod="metallb-system/metallb-operator-webhook-server-866879c6dc-gggxz" Dec 02 14:40:48 crc kubenswrapper[4814]: W1202 14:40:48.377983 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a489ded_6343_4d09_a0a9_e61cc452e979.slice/crio-5d04281119d8597d70ddbce2267c54889e53aa7f78f233ef2d12d7037a31cf87 WatchSource:0}: Error finding container 5d04281119d8597d70ddbce2267c54889e53aa7f78f233ef2d12d7037a31cf87: Status 404 returned error can't find the container with id 5d04281119d8597d70ddbce2267c54889e53aa7f78f233ef2d12d7037a31cf87 Dec 02 14:40:48 crc kubenswrapper[4814]: I1202 
14:40:48.428318 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-866879c6dc-gggxz" Dec 02 14:40:48 crc kubenswrapper[4814]: I1202 14:40:48.566745 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b695cd4c4-skhtm" event={"ID":"4a489ded-6343-4d09-a0a9-e61cc452e979","Type":"ContainerStarted","Data":"5d04281119d8597d70ddbce2267c54889e53aa7f78f233ef2d12d7037a31cf87"} Dec 02 14:40:48 crc kubenswrapper[4814]: I1202 14:40:48.881197 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-866879c6dc-gggxz"] Dec 02 14:40:48 crc kubenswrapper[4814]: W1202 14:40:48.891653 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eabc75b_76b2_4d40_9055_b0b7906344a6.slice/crio-6bd4c68c0d541b6a6ae2825bb1fb4a6b276baa7cfd2ec61b2a73d63fd14fbead WatchSource:0}: Error finding container 6bd4c68c0d541b6a6ae2825bb1fb4a6b276baa7cfd2ec61b2a73d63fd14fbead: Status 404 returned error can't find the container with id 6bd4c68c0d541b6a6ae2825bb1fb4a6b276baa7cfd2ec61b2a73d63fd14fbead Dec 02 14:40:49 crc kubenswrapper[4814]: I1202 14:40:49.574634 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-866879c6dc-gggxz" event={"ID":"9eabc75b-76b2-4d40-9055-b0b7906344a6","Type":"ContainerStarted","Data":"6bd4c68c0d541b6a6ae2825bb1fb4a6b276baa7cfd2ec61b2a73d63fd14fbead"} Dec 02 14:40:54 crc kubenswrapper[4814]: I1202 14:40:54.612391 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-866879c6dc-gggxz" event={"ID":"9eabc75b-76b2-4d40-9055-b0b7906344a6","Type":"ContainerStarted","Data":"8d501f9aea877f1c33aadfc7a152161908f4f77a7dd014908ef36fcd6d716940"} Dec 02 14:40:54 crc kubenswrapper[4814]: I1202 14:40:54.612977 4814 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-866879c6dc-gggxz" Dec 02 14:40:54 crc kubenswrapper[4814]: I1202 14:40:54.614548 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b695cd4c4-skhtm" event={"ID":"4a489ded-6343-4d09-a0a9-e61cc452e979","Type":"ContainerStarted","Data":"95f7a32e6d392cabc0c589016db5247f98a025d0327ada669560d7f8f68c8b4e"} Dec 02 14:40:54 crc kubenswrapper[4814]: I1202 14:40:54.614742 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6b695cd4c4-skhtm" Dec 02 14:40:54 crc kubenswrapper[4814]: I1202 14:40:54.636451 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-866879c6dc-gggxz" podStartSLOduration=1.578237384 podStartE2EDuration="6.636430855s" podCreationTimestamp="2025-12-02 14:40:48 +0000 UTC" firstStartedPulling="2025-12-02 14:40:48.894498763 +0000 UTC m=+711.018860967" lastFinishedPulling="2025-12-02 14:40:53.952692234 +0000 UTC m=+716.077054438" observedRunningTime="2025-12-02 14:40:54.632026587 +0000 UTC m=+716.756388821" watchObservedRunningTime="2025-12-02 14:40:54.636430855 +0000 UTC m=+716.760793099" Dec 02 14:40:54 crc kubenswrapper[4814]: I1202 14:40:54.678623 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6b695cd4c4-skhtm" podStartSLOduration=2.134762106 podStartE2EDuration="7.678593346s" podCreationTimestamp="2025-12-02 14:40:47 +0000 UTC" firstStartedPulling="2025-12-02 14:40:48.381514959 +0000 UTC m=+710.505877163" lastFinishedPulling="2025-12-02 14:40:53.925346199 +0000 UTC m=+716.049708403" observedRunningTime="2025-12-02 14:40:54.669905672 +0000 UTC m=+716.794267916" watchObservedRunningTime="2025-12-02 14:40:54.678593346 +0000 UTC m=+716.802955600" Dec 02 14:41:08 
crc kubenswrapper[4814]: I1202 14:41:08.433884 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-866879c6dc-gggxz" Dec 02 14:41:20 crc kubenswrapper[4814]: I1202 14:41:20.651730 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:41:20 crc kubenswrapper[4814]: I1202 14:41:20.652285 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.069986 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6b695cd4c4-skhtm" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.801292 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-h5cxr"] Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.804664 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.806754 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-74bjp" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.806801 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-z95mn"] Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.806974 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.807449 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.807794 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z95mn" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.809073 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.825330 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-z95mn"] Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.901846 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/88b9b5e7-b351-4789-9739-8ed7fbe7269b-frr-sockets\") pod \"frr-k8s-h5cxr\" (UID: \"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.901900 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88b9b5e7-b351-4789-9739-8ed7fbe7269b-metrics-certs\") pod \"frr-k8s-h5cxr\" (UID: 
\"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.901953 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/88b9b5e7-b351-4789-9739-8ed7fbe7269b-frr-startup\") pod \"frr-k8s-h5cxr\" (UID: \"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.902085 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h79jf\" (UniqueName: \"kubernetes.io/projected/8a69012b-d3a3-47ba-847f-5c171ca07026-kube-api-access-h79jf\") pod \"frr-k8s-webhook-server-7fcb986d4-z95mn\" (UID: \"8a69012b-d3a3-47ba-847f-5c171ca07026\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z95mn" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.902154 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/88b9b5e7-b351-4789-9739-8ed7fbe7269b-frr-conf\") pod \"frr-k8s-h5cxr\" (UID: \"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.902200 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs9n5\" (UniqueName: \"kubernetes.io/projected/88b9b5e7-b351-4789-9739-8ed7fbe7269b-kube-api-access-qs9n5\") pod \"frr-k8s-h5cxr\" (UID: \"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.902353 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a69012b-d3a3-47ba-847f-5c171ca07026-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-z95mn\" (UID: 
\"8a69012b-d3a3-47ba-847f-5c171ca07026\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z95mn" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.902393 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/88b9b5e7-b351-4789-9739-8ed7fbe7269b-metrics\") pod \"frr-k8s-h5cxr\" (UID: \"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.902426 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/88b9b5e7-b351-4789-9739-8ed7fbe7269b-reloader\") pod \"frr-k8s-h5cxr\" (UID: \"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.921913 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-8g52s"] Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.923217 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-8g52s" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.925048 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.925773 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-c59qp" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.926037 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.926095 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.946115 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-4vgbp"] Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.947502 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-4vgbp" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.949481 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 02 14:41:28 crc kubenswrapper[4814]: I1202 14:41:28.967196 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-4vgbp"] Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.003634 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7ffa654b-5a5c-44c2-9b37-85e00c8de913-memberlist\") pod \"speaker-8g52s\" (UID: \"7ffa654b-5a5c-44c2-9b37-85e00c8de913\") " pod="metallb-system/speaker-8g52s" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.003692 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h79jf\" (UniqueName: \"kubernetes.io/projected/8a69012b-d3a3-47ba-847f-5c171ca07026-kube-api-access-h79jf\") pod \"frr-k8s-webhook-server-7fcb986d4-z95mn\" (UID: \"8a69012b-d3a3-47ba-847f-5c171ca07026\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z95mn" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.003731 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7ffa654b-5a5c-44c2-9b37-85e00c8de913-metallb-excludel2\") pod \"speaker-8g52s\" (UID: \"7ffa654b-5a5c-44c2-9b37-85e00c8de913\") " pod="metallb-system/speaker-8g52s" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.003751 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/88b9b5e7-b351-4789-9739-8ed7fbe7269b-frr-conf\") pod \"frr-k8s-h5cxr\" (UID: \"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:29 crc kubenswrapper[4814]: 
I1202 14:41:29.003796 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs9n5\" (UniqueName: \"kubernetes.io/projected/88b9b5e7-b351-4789-9739-8ed7fbe7269b-kube-api-access-qs9n5\") pod \"frr-k8s-h5cxr\" (UID: \"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.003827 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ffa654b-5a5c-44c2-9b37-85e00c8de913-metrics-certs\") pod \"speaker-8g52s\" (UID: \"7ffa654b-5a5c-44c2-9b37-85e00c8de913\") " pod="metallb-system/speaker-8g52s" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.003845 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a69012b-d3a3-47ba-847f-5c171ca07026-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-z95mn\" (UID: \"8a69012b-d3a3-47ba-847f-5c171ca07026\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z95mn" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.003939 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/88b9b5e7-b351-4789-9739-8ed7fbe7269b-metrics\") pod \"frr-k8s-h5cxr\" (UID: \"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.003994 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/88b9b5e7-b351-4789-9739-8ed7fbe7269b-reloader\") pod \"frr-k8s-h5cxr\" (UID: \"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.004046 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/38d43829-f3bd-4d95-9a6f-5e89f3f0f30a-metrics-certs\") pod \"controller-f8648f98b-4vgbp\" (UID: \"38d43829-f3bd-4d95-9a6f-5e89f3f0f30a\") " pod="metallb-system/controller-f8648f98b-4vgbp" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.004101 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/88b9b5e7-b351-4789-9739-8ed7fbe7269b-frr-sockets\") pod \"frr-k8s-h5cxr\" (UID: \"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.004118 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38d43829-f3bd-4d95-9a6f-5e89f3f0f30a-cert\") pod \"controller-f8648f98b-4vgbp\" (UID: \"38d43829-f3bd-4d95-9a6f-5e89f3f0f30a\") " pod="metallb-system/controller-f8648f98b-4vgbp" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.004137 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88b9b5e7-b351-4789-9739-8ed7fbe7269b-metrics-certs\") pod \"frr-k8s-h5cxr\" (UID: \"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.004173 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6p7n\" (UniqueName: \"kubernetes.io/projected/7ffa654b-5a5c-44c2-9b37-85e00c8de913-kube-api-access-r6p7n\") pod \"speaker-8g52s\" (UID: \"7ffa654b-5a5c-44c2-9b37-85e00c8de913\") " pod="metallb-system/speaker-8g52s" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.004265 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/88b9b5e7-b351-4789-9739-8ed7fbe7269b-frr-startup\") pod \"frr-k8s-h5cxr\" 
(UID: \"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.004304 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25rnl\" (UniqueName: \"kubernetes.io/projected/38d43829-f3bd-4d95-9a6f-5e89f3f0f30a-kube-api-access-25rnl\") pod \"controller-f8648f98b-4vgbp\" (UID: \"38d43829-f3bd-4d95-9a6f-5e89f3f0f30a\") " pod="metallb-system/controller-f8648f98b-4vgbp" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.004509 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/88b9b5e7-b351-4789-9739-8ed7fbe7269b-frr-conf\") pod \"frr-k8s-h5cxr\" (UID: \"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.004539 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/88b9b5e7-b351-4789-9739-8ed7fbe7269b-metrics\") pod \"frr-k8s-h5cxr\" (UID: \"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:29 crc kubenswrapper[4814]: E1202 14:41:29.004613 4814 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.004655 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/88b9b5e7-b351-4789-9739-8ed7fbe7269b-reloader\") pod \"frr-k8s-h5cxr\" (UID: \"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.004694 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/88b9b5e7-b351-4789-9739-8ed7fbe7269b-frr-sockets\") pod \"frr-k8s-h5cxr\" 
(UID: \"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:29 crc kubenswrapper[4814]: E1202 14:41:29.004666 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a69012b-d3a3-47ba-847f-5c171ca07026-cert podName:8a69012b-d3a3-47ba-847f-5c171ca07026 nodeName:}" failed. No retries permitted until 2025-12-02 14:41:29.504649734 +0000 UTC m=+751.629011938 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8a69012b-d3a3-47ba-847f-5c171ca07026-cert") pod "frr-k8s-webhook-server-7fcb986d4-z95mn" (UID: "8a69012b-d3a3-47ba-847f-5c171ca07026") : secret "frr-k8s-webhook-server-cert" not found Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.005432 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/88b9b5e7-b351-4789-9739-8ed7fbe7269b-frr-startup\") pod \"frr-k8s-h5cxr\" (UID: \"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.018556 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88b9b5e7-b351-4789-9739-8ed7fbe7269b-metrics-certs\") pod \"frr-k8s-h5cxr\" (UID: \"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.019292 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h79jf\" (UniqueName: \"kubernetes.io/projected/8a69012b-d3a3-47ba-847f-5c171ca07026-kube-api-access-h79jf\") pod \"frr-k8s-webhook-server-7fcb986d4-z95mn\" (UID: \"8a69012b-d3a3-47ba-847f-5c171ca07026\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z95mn" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.019912 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qs9n5\" (UniqueName: \"kubernetes.io/projected/88b9b5e7-b351-4789-9739-8ed7fbe7269b-kube-api-access-qs9n5\") pod \"frr-k8s-h5cxr\" (UID: \"88b9b5e7-b351-4789-9739-8ed7fbe7269b\") " pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.105659 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ffa654b-5a5c-44c2-9b37-85e00c8de913-metrics-certs\") pod \"speaker-8g52s\" (UID: \"7ffa654b-5a5c-44c2-9b37-85e00c8de913\") " pod="metallb-system/speaker-8g52s" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.106078 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/38d43829-f3bd-4d95-9a6f-5e89f3f0f30a-metrics-certs\") pod \"controller-f8648f98b-4vgbp\" (UID: \"38d43829-f3bd-4d95-9a6f-5e89f3f0f30a\") " pod="metallb-system/controller-f8648f98b-4vgbp" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.106110 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38d43829-f3bd-4d95-9a6f-5e89f3f0f30a-cert\") pod \"controller-f8648f98b-4vgbp\" (UID: \"38d43829-f3bd-4d95-9a6f-5e89f3f0f30a\") " pod="metallb-system/controller-f8648f98b-4vgbp" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.106159 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6p7n\" (UniqueName: \"kubernetes.io/projected/7ffa654b-5a5c-44c2-9b37-85e00c8de913-kube-api-access-r6p7n\") pod \"speaker-8g52s\" (UID: \"7ffa654b-5a5c-44c2-9b37-85e00c8de913\") " pod="metallb-system/speaker-8g52s" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.106210 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25rnl\" (UniqueName: \"kubernetes.io/projected/38d43829-f3bd-4d95-9a6f-5e89f3f0f30a-kube-api-access-25rnl\") pod 
\"controller-f8648f98b-4vgbp\" (UID: \"38d43829-f3bd-4d95-9a6f-5e89f3f0f30a\") " pod="metallb-system/controller-f8648f98b-4vgbp" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.106251 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7ffa654b-5a5c-44c2-9b37-85e00c8de913-memberlist\") pod \"speaker-8g52s\" (UID: \"7ffa654b-5a5c-44c2-9b37-85e00c8de913\") " pod="metallb-system/speaker-8g52s" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.106270 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7ffa654b-5a5c-44c2-9b37-85e00c8de913-metallb-excludel2\") pod \"speaker-8g52s\" (UID: \"7ffa654b-5a5c-44c2-9b37-85e00c8de913\") " pod="metallb-system/speaker-8g52s" Dec 02 14:41:29 crc kubenswrapper[4814]: E1202 14:41:29.107913 4814 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 02 14:41:29 crc kubenswrapper[4814]: E1202 14:41:29.107983 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ffa654b-5a5c-44c2-9b37-85e00c8de913-memberlist podName:7ffa654b-5a5c-44c2-9b37-85e00c8de913 nodeName:}" failed. No retries permitted until 2025-12-02 14:41:29.607967585 +0000 UTC m=+751.732329789 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7ffa654b-5a5c-44c2-9b37-85e00c8de913-memberlist") pod "speaker-8g52s" (UID: "7ffa654b-5a5c-44c2-9b37-85e00c8de913") : secret "metallb-memberlist" not found Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.112635 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.113635 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ffa654b-5a5c-44c2-9b37-85e00c8de913-metrics-certs\") pod \"speaker-8g52s\" (UID: \"7ffa654b-5a5c-44c2-9b37-85e00c8de913\") " pod="metallb-system/speaker-8g52s" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.114928 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/38d43829-f3bd-4d95-9a6f-5e89f3f0f30a-metrics-certs\") pod \"controller-f8648f98b-4vgbp\" (UID: \"38d43829-f3bd-4d95-9a6f-5e89f3f0f30a\") " pod="metallb-system/controller-f8648f98b-4vgbp" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.129803 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.130011 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38d43829-f3bd-4d95-9a6f-5e89f3f0f30a-cert\") pod \"controller-f8648f98b-4vgbp\" (UID: \"38d43829-f3bd-4d95-9a6f-5e89f3f0f30a\") " pod="metallb-system/controller-f8648f98b-4vgbp" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.145943 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6p7n\" (UniqueName: \"kubernetes.io/projected/7ffa654b-5a5c-44c2-9b37-85e00c8de913-kube-api-access-r6p7n\") pod \"speaker-8g52s\" (UID: \"7ffa654b-5a5c-44c2-9b37-85e00c8de913\") " pod="metallb-system/speaker-8g52s" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.149070 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25rnl\" (UniqueName: \"kubernetes.io/projected/38d43829-f3bd-4d95-9a6f-5e89f3f0f30a-kube-api-access-25rnl\") pod \"controller-f8648f98b-4vgbp\" (UID: \"38d43829-f3bd-4d95-9a6f-5e89f3f0f30a\") " pod="metallb-system/controller-f8648f98b-4vgbp" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.163974 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7ffa654b-5a5c-44c2-9b37-85e00c8de913-metallb-excludel2\") pod \"speaker-8g52s\" (UID: \"7ffa654b-5a5c-44c2-9b37-85e00c8de913\") " pod="metallb-system/speaker-8g52s" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.269489 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-4vgbp" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.522136 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a69012b-d3a3-47ba-847f-5c171ca07026-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-z95mn\" (UID: \"8a69012b-d3a3-47ba-847f-5c171ca07026\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z95mn" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.536997 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a69012b-d3a3-47ba-847f-5c171ca07026-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-z95mn\" (UID: \"8a69012b-d3a3-47ba-847f-5c171ca07026\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z95mn" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.546628 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-4vgbp"] Dec 02 14:41:29 crc kubenswrapper[4814]: W1202 14:41:29.553604 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38d43829_f3bd_4d95_9a6f_5e89f3f0f30a.slice/crio-b6d071d1f50d56a8bbe80d231e8ff45d0bf6fbcbe76b907c55a62432db9f0439 WatchSource:0}: Error finding container b6d071d1f50d56a8bbe80d231e8ff45d0bf6fbcbe76b907c55a62432db9f0439: Status 404 returned error can't find the container with id b6d071d1f50d56a8bbe80d231e8ff45d0bf6fbcbe76b907c55a62432db9f0439 Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.623773 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7ffa654b-5a5c-44c2-9b37-85e00c8de913-memberlist\") pod \"speaker-8g52s\" (UID: \"7ffa654b-5a5c-44c2-9b37-85e00c8de913\") " pod="metallb-system/speaker-8g52s" Dec 02 14:41:29 crc kubenswrapper[4814]: E1202 14:41:29.623943 4814 secret.go:188] Couldn't 
get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 02 14:41:29 crc kubenswrapper[4814]: E1202 14:41:29.624038 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ffa654b-5a5c-44c2-9b37-85e00c8de913-memberlist podName:7ffa654b-5a5c-44c2-9b37-85e00c8de913 nodeName:}" failed. No retries permitted until 2025-12-02 14:41:30.624019663 +0000 UTC m=+752.748381867 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7ffa654b-5a5c-44c2-9b37-85e00c8de913-memberlist") pod "speaker-8g52s" (UID: "7ffa654b-5a5c-44c2-9b37-85e00c8de913") : secret "metallb-memberlist" not found Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.735392 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z95mn" Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.867521 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-4vgbp" event={"ID":"38d43829-f3bd-4d95-9a6f-5e89f3f0f30a","Type":"ContainerStarted","Data":"b6d071d1f50d56a8bbe80d231e8ff45d0bf6fbcbe76b907c55a62432db9f0439"} Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.868628 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h5cxr" event={"ID":"88b9b5e7-b351-4789-9739-8ed7fbe7269b","Type":"ContainerStarted","Data":"1b89554781143b20f31dc73f283d48847893f4e5c1a4573e640534c75c082f80"} Dec 02 14:41:29 crc kubenswrapper[4814]: I1202 14:41:29.974371 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-z95mn"] Dec 02 14:41:30 crc kubenswrapper[4814]: I1202 14:41:30.257266 4814 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 02 14:41:30 crc kubenswrapper[4814]: I1202 14:41:30.637031 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7ffa654b-5a5c-44c2-9b37-85e00c8de913-memberlist\") pod \"speaker-8g52s\" (UID: \"7ffa654b-5a5c-44c2-9b37-85e00c8de913\") " pod="metallb-system/speaker-8g52s" Dec 02 14:41:30 crc kubenswrapper[4814]: I1202 14:41:30.644100 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7ffa654b-5a5c-44c2-9b37-85e00c8de913-memberlist\") pod \"speaker-8g52s\" (UID: \"7ffa654b-5a5c-44c2-9b37-85e00c8de913\") " pod="metallb-system/speaker-8g52s" Dec 02 14:41:30 crc kubenswrapper[4814]: I1202 14:41:30.740317 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-8g52s" Dec 02 14:41:30 crc kubenswrapper[4814]: W1202 14:41:30.760856 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ffa654b_5a5c_44c2_9b37_85e00c8de913.slice/crio-47894b9ae0fc4c5108e41c6ccba2a2b7bc397ab1d130a00226ffe6470c38a519 WatchSource:0}: Error finding container 47894b9ae0fc4c5108e41c6ccba2a2b7bc397ab1d130a00226ffe6470c38a519: Status 404 returned error can't find the container with id 47894b9ae0fc4c5108e41c6ccba2a2b7bc397ab1d130a00226ffe6470c38a519 Dec 02 14:41:30 crc kubenswrapper[4814]: I1202 14:41:30.888084 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z95mn" event={"ID":"8a69012b-d3a3-47ba-847f-5c171ca07026","Type":"ContainerStarted","Data":"ad267a2aec9ede75e57ee3b2df6e6ba2ff6317fb2094d2b2b6c5cae927bd2cdd"} Dec 02 14:41:30 crc kubenswrapper[4814]: I1202 14:41:30.891889 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-4vgbp" event={"ID":"38d43829-f3bd-4d95-9a6f-5e89f3f0f30a","Type":"ContainerStarted","Data":"303223f59d9078547d448558e5ae8fba652af377911bbe031dd317982c7bb52e"} Dec 02 14:41:30 crc 
kubenswrapper[4814]: I1202 14:41:30.891951 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-4vgbp" event={"ID":"38d43829-f3bd-4d95-9a6f-5e89f3f0f30a","Type":"ContainerStarted","Data":"a6ca421db3c13e7fc0b1ef51ab2fcd0e762956dc3f9f5a6852df2841a9d769b7"} Dec 02 14:41:30 crc kubenswrapper[4814]: I1202 14:41:30.893142 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-4vgbp" Dec 02 14:41:30 crc kubenswrapper[4814]: I1202 14:41:30.899186 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8g52s" event={"ID":"7ffa654b-5a5c-44c2-9b37-85e00c8de913","Type":"ContainerStarted","Data":"47894b9ae0fc4c5108e41c6ccba2a2b7bc397ab1d130a00226ffe6470c38a519"} Dec 02 14:41:30 crc kubenswrapper[4814]: I1202 14:41:30.912619 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-4vgbp" podStartSLOduration=2.912603637 podStartE2EDuration="2.912603637s" podCreationTimestamp="2025-12-02 14:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:41:30.912087755 +0000 UTC m=+753.036449969" watchObservedRunningTime="2025-12-02 14:41:30.912603637 +0000 UTC m=+753.036965841" Dec 02 14:41:31 crc kubenswrapper[4814]: I1202 14:41:31.907485 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8g52s" event={"ID":"7ffa654b-5a5c-44c2-9b37-85e00c8de913","Type":"ContainerStarted","Data":"5cc38e0effc40579d7d9884c66e82a4dae87d780899018961a162aaadcdbce2d"} Dec 02 14:41:31 crc kubenswrapper[4814]: I1202 14:41:31.907878 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8g52s" event={"ID":"7ffa654b-5a5c-44c2-9b37-85e00c8de913","Type":"ContainerStarted","Data":"816c9b02fea4ec4e004b5ec22c0897a222be98310f39b155e25665c169cfd286"} Dec 02 14:41:31 crc 
kubenswrapper[4814]: I1202 14:41:31.927968 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-8g52s" podStartSLOduration=3.927951024 podStartE2EDuration="3.927951024s" podCreationTimestamp="2025-12-02 14:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:41:31.925516956 +0000 UTC m=+754.049879160" watchObservedRunningTime="2025-12-02 14:41:31.927951024 +0000 UTC m=+754.052313228" Dec 02 14:41:32 crc kubenswrapper[4814]: I1202 14:41:32.913246 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-8g52s" Dec 02 14:41:37 crc kubenswrapper[4814]: I1202 14:41:37.960083 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z95mn" event={"ID":"8a69012b-d3a3-47ba-847f-5c171ca07026","Type":"ContainerStarted","Data":"bbc44f40abf8284bc058b1fc7a884bb6a344a8b3093cf77616c87ac6835f924b"} Dec 02 14:41:37 crc kubenswrapper[4814]: I1202 14:41:37.961620 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z95mn" Dec 02 14:41:37 crc kubenswrapper[4814]: I1202 14:41:37.963402 4814 generic.go:334] "Generic (PLEG): container finished" podID="88b9b5e7-b351-4789-9739-8ed7fbe7269b" containerID="4fdcdb9d7e91d1dfeeb868f80da92b0d62403f13a14d0cc58c7baa7594968e66" exitCode=0 Dec 02 14:41:37 crc kubenswrapper[4814]: I1202 14:41:37.963437 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h5cxr" event={"ID":"88b9b5e7-b351-4789-9739-8ed7fbe7269b","Type":"ContainerDied","Data":"4fdcdb9d7e91d1dfeeb868f80da92b0d62403f13a14d0cc58c7baa7594968e66"} Dec 02 14:41:37 crc kubenswrapper[4814]: I1202 14:41:37.998776 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z95mn" 
podStartSLOduration=2.641131972 podStartE2EDuration="9.998756213s" podCreationTimestamp="2025-12-02 14:41:28 +0000 UTC" firstStartedPulling="2025-12-02 14:41:29.975999794 +0000 UTC m=+752.100362008" lastFinishedPulling="2025-12-02 14:41:37.333624005 +0000 UTC m=+759.457986249" observedRunningTime="2025-12-02 14:41:37.981719888 +0000 UTC m=+760.106082092" watchObservedRunningTime="2025-12-02 14:41:37.998756213 +0000 UTC m=+760.123118417" Dec 02 14:41:38 crc kubenswrapper[4814]: I1202 14:41:38.971863 4814 generic.go:334] "Generic (PLEG): container finished" podID="88b9b5e7-b351-4789-9739-8ed7fbe7269b" containerID="17916c818c929d5aef0ad13dd38011c11b2f21e481586ea4a9b2803d1cde6f7c" exitCode=0 Dec 02 14:41:38 crc kubenswrapper[4814]: I1202 14:41:38.972015 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h5cxr" event={"ID":"88b9b5e7-b351-4789-9739-8ed7fbe7269b","Type":"ContainerDied","Data":"17916c818c929d5aef0ad13dd38011c11b2f21e481586ea4a9b2803d1cde6f7c"} Dec 02 14:41:39 crc kubenswrapper[4814]: I1202 14:41:39.980536 4814 generic.go:334] "Generic (PLEG): container finished" podID="88b9b5e7-b351-4789-9739-8ed7fbe7269b" containerID="9eaa86bf5917f5701694a339f281495fdd1c18aaffa6a3e82ac26b29ad7907b3" exitCode=0 Dec 02 14:41:39 crc kubenswrapper[4814]: I1202 14:41:39.981450 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h5cxr" event={"ID":"88b9b5e7-b351-4789-9739-8ed7fbe7269b","Type":"ContainerDied","Data":"9eaa86bf5917f5701694a339f281495fdd1c18aaffa6a3e82ac26b29ad7907b3"} Dec 02 14:41:40 crc kubenswrapper[4814]: I1202 14:41:40.991773 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h5cxr" event={"ID":"88b9b5e7-b351-4789-9739-8ed7fbe7269b","Type":"ContainerStarted","Data":"9118de1ce4554ca674971adadd8578158d54a8bb0665b32c85c388a52222e90f"} Dec 02 14:41:40 crc kubenswrapper[4814]: I1202 14:41:40.992081 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-h5cxr" event={"ID":"88b9b5e7-b351-4789-9739-8ed7fbe7269b","Type":"ContainerStarted","Data":"b9f3c8292ddda357db470239924c4eddb55e409f18b8eb69cdd1a027f606f47c"} Dec 02 14:41:40 crc kubenswrapper[4814]: I1202 14:41:40.992095 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h5cxr" event={"ID":"88b9b5e7-b351-4789-9739-8ed7fbe7269b","Type":"ContainerStarted","Data":"5922ab0a283b7c5cf9155897268a45ad13a9b4c869867458630b8eaee4b73c51"} Dec 02 14:41:40 crc kubenswrapper[4814]: I1202 14:41:40.992105 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h5cxr" event={"ID":"88b9b5e7-b351-4789-9739-8ed7fbe7269b","Type":"ContainerStarted","Data":"95f866101b501e74e64ba79bca3362721ed0defd8c7dbe84b9fb9937164ee016"} Dec 02 14:41:40 crc kubenswrapper[4814]: I1202 14:41:40.992114 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h5cxr" event={"ID":"88b9b5e7-b351-4789-9739-8ed7fbe7269b","Type":"ContainerStarted","Data":"951ccb702c91de49c35af8d266078202aebe7c70f4acd270595dec9e10a0b3f6"} Dec 02 14:41:42 crc kubenswrapper[4814]: I1202 14:41:42.003693 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h5cxr" event={"ID":"88b9b5e7-b351-4789-9739-8ed7fbe7269b","Type":"ContainerStarted","Data":"1933f5bc656cd23e5e958a4a2d741e897d741763ac667199f91ba78d96b6a12f"} Dec 02 14:41:42 crc kubenswrapper[4814]: I1202 14:41:42.003921 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:42 crc kubenswrapper[4814]: I1202 14:41:42.027331 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-h5cxr" podStartSLOduration=6.244816013 podStartE2EDuration="14.027312321s" podCreationTimestamp="2025-12-02 14:41:28 +0000 UTC" firstStartedPulling="2025-12-02 14:41:29.530303031 +0000 UTC m=+751.654665245" lastFinishedPulling="2025-12-02 14:41:37.312799329 
+0000 UTC m=+759.437161553" observedRunningTime="2025-12-02 14:41:42.02559569 +0000 UTC m=+764.149957894" watchObservedRunningTime="2025-12-02 14:41:42.027312321 +0000 UTC m=+764.151674545" Dec 02 14:41:44 crc kubenswrapper[4814]: I1202 14:41:44.131905 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:44 crc kubenswrapper[4814]: I1202 14:41:44.172902 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:49 crc kubenswrapper[4814]: I1202 14:41:49.275287 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-4vgbp" Dec 02 14:41:49 crc kubenswrapper[4814]: I1202 14:41:49.744885 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z95mn" Dec 02 14:41:50 crc kubenswrapper[4814]: I1202 14:41:50.651150 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:41:50 crc kubenswrapper[4814]: I1202 14:41:50.651215 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:41:50 crc kubenswrapper[4814]: I1202 14:41:50.745675 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-8g52s" Dec 02 14:41:52 crc kubenswrapper[4814]: I1202 14:41:52.179100 4814 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9"] Dec 02 14:41:52 crc kubenswrapper[4814]: I1202 14:41:52.181324 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9" Dec 02 14:41:52 crc kubenswrapper[4814]: I1202 14:41:52.183887 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 02 14:41:52 crc kubenswrapper[4814]: I1202 14:41:52.227860 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9"] Dec 02 14:41:52 crc kubenswrapper[4814]: I1202 14:41:52.342674 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-845tz\" (UniqueName: \"kubernetes.io/projected/ebcb42b8-8b25-4bff-92d5-f572066a7c53-kube-api-access-845tz\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9\" (UID: \"ebcb42b8-8b25-4bff-92d5-f572066a7c53\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9" Dec 02 14:41:52 crc kubenswrapper[4814]: I1202 14:41:52.342816 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ebcb42b8-8b25-4bff-92d5-f572066a7c53-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9\" (UID: \"ebcb42b8-8b25-4bff-92d5-f572066a7c53\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9" Dec 02 14:41:52 crc kubenswrapper[4814]: I1202 14:41:52.344289 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ebcb42b8-8b25-4bff-92d5-f572066a7c53-util\") pod 
\"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9\" (UID: \"ebcb42b8-8b25-4bff-92d5-f572066a7c53\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9" Dec 02 14:41:52 crc kubenswrapper[4814]: I1202 14:41:52.446620 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-845tz\" (UniqueName: \"kubernetes.io/projected/ebcb42b8-8b25-4bff-92d5-f572066a7c53-kube-api-access-845tz\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9\" (UID: \"ebcb42b8-8b25-4bff-92d5-f572066a7c53\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9" Dec 02 14:41:52 crc kubenswrapper[4814]: I1202 14:41:52.447131 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ebcb42b8-8b25-4bff-92d5-f572066a7c53-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9\" (UID: \"ebcb42b8-8b25-4bff-92d5-f572066a7c53\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9" Dec 02 14:41:52 crc kubenswrapper[4814]: I1202 14:41:52.447163 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ebcb42b8-8b25-4bff-92d5-f572066a7c53-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9\" (UID: \"ebcb42b8-8b25-4bff-92d5-f572066a7c53\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9" Dec 02 14:41:52 crc kubenswrapper[4814]: I1202 14:41:52.447831 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ebcb42b8-8b25-4bff-92d5-f572066a7c53-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9\" (UID: \"ebcb42b8-8b25-4bff-92d5-f572066a7c53\") " 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9" Dec 02 14:41:52 crc kubenswrapper[4814]: I1202 14:41:52.447845 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ebcb42b8-8b25-4bff-92d5-f572066a7c53-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9\" (UID: \"ebcb42b8-8b25-4bff-92d5-f572066a7c53\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9" Dec 02 14:41:52 crc kubenswrapper[4814]: I1202 14:41:52.467829 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-845tz\" (UniqueName: \"kubernetes.io/projected/ebcb42b8-8b25-4bff-92d5-f572066a7c53-kube-api-access-845tz\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9\" (UID: \"ebcb42b8-8b25-4bff-92d5-f572066a7c53\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9" Dec 02 14:41:52 crc kubenswrapper[4814]: I1202 14:41:52.524214 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9" Dec 02 14:41:52 crc kubenswrapper[4814]: I1202 14:41:52.785484 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9"] Dec 02 14:41:53 crc kubenswrapper[4814]: I1202 14:41:53.590090 4814 generic.go:334] "Generic (PLEG): container finished" podID="ebcb42b8-8b25-4bff-92d5-f572066a7c53" containerID="e95617e20eeccdac0d48b607ad969cb8d5b6d9cbe026a392584519ce57780455" exitCode=0 Dec 02 14:41:53 crc kubenswrapper[4814]: I1202 14:41:53.590138 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9" event={"ID":"ebcb42b8-8b25-4bff-92d5-f572066a7c53","Type":"ContainerDied","Data":"e95617e20eeccdac0d48b607ad969cb8d5b6d9cbe026a392584519ce57780455"} Dec 02 14:41:53 crc kubenswrapper[4814]: I1202 14:41:53.590168 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9" event={"ID":"ebcb42b8-8b25-4bff-92d5-f572066a7c53","Type":"ContainerStarted","Data":"ed83daac585f82bc3b33c6dcdb7f1227c28dbe85dfa6c8fde34d9648f9706e41"} Dec 02 14:41:54 crc kubenswrapper[4814]: I1202 14:41:54.516380 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q2wrv"] Dec 02 14:41:54 crc kubenswrapper[4814]: I1202 14:41:54.518630 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2wrv" Dec 02 14:41:54 crc kubenswrapper[4814]: I1202 14:41:54.532572 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q2wrv"] Dec 02 14:41:54 crc kubenswrapper[4814]: I1202 14:41:54.691395 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3159f91-ca5c-454c-a434-eac725257498-catalog-content\") pod \"redhat-operators-q2wrv\" (UID: \"a3159f91-ca5c-454c-a434-eac725257498\") " pod="openshift-marketplace/redhat-operators-q2wrv" Dec 02 14:41:54 crc kubenswrapper[4814]: I1202 14:41:54.691532 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3159f91-ca5c-454c-a434-eac725257498-utilities\") pod \"redhat-operators-q2wrv\" (UID: \"a3159f91-ca5c-454c-a434-eac725257498\") " pod="openshift-marketplace/redhat-operators-q2wrv" Dec 02 14:41:54 crc kubenswrapper[4814]: I1202 14:41:54.691719 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzqbr\" (UniqueName: \"kubernetes.io/projected/a3159f91-ca5c-454c-a434-eac725257498-kube-api-access-lzqbr\") pod \"redhat-operators-q2wrv\" (UID: \"a3159f91-ca5c-454c-a434-eac725257498\") " pod="openshift-marketplace/redhat-operators-q2wrv" Dec 02 14:41:54 crc kubenswrapper[4814]: I1202 14:41:54.793055 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3159f91-ca5c-454c-a434-eac725257498-catalog-content\") pod \"redhat-operators-q2wrv\" (UID: \"a3159f91-ca5c-454c-a434-eac725257498\") " pod="openshift-marketplace/redhat-operators-q2wrv" Dec 02 14:41:54 crc kubenswrapper[4814]: I1202 14:41:54.793126 4814 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3159f91-ca5c-454c-a434-eac725257498-utilities\") pod \"redhat-operators-q2wrv\" (UID: \"a3159f91-ca5c-454c-a434-eac725257498\") " pod="openshift-marketplace/redhat-operators-q2wrv" Dec 02 14:41:54 crc kubenswrapper[4814]: I1202 14:41:54.793180 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzqbr\" (UniqueName: \"kubernetes.io/projected/a3159f91-ca5c-454c-a434-eac725257498-kube-api-access-lzqbr\") pod \"redhat-operators-q2wrv\" (UID: \"a3159f91-ca5c-454c-a434-eac725257498\") " pod="openshift-marketplace/redhat-operators-q2wrv" Dec 02 14:41:54 crc kubenswrapper[4814]: I1202 14:41:54.793521 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3159f91-ca5c-454c-a434-eac725257498-catalog-content\") pod \"redhat-operators-q2wrv\" (UID: \"a3159f91-ca5c-454c-a434-eac725257498\") " pod="openshift-marketplace/redhat-operators-q2wrv" Dec 02 14:41:54 crc kubenswrapper[4814]: I1202 14:41:54.793790 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3159f91-ca5c-454c-a434-eac725257498-utilities\") pod \"redhat-operators-q2wrv\" (UID: \"a3159f91-ca5c-454c-a434-eac725257498\") " pod="openshift-marketplace/redhat-operators-q2wrv" Dec 02 14:41:54 crc kubenswrapper[4814]: I1202 14:41:54.812935 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzqbr\" (UniqueName: \"kubernetes.io/projected/a3159f91-ca5c-454c-a434-eac725257498-kube-api-access-lzqbr\") pod \"redhat-operators-q2wrv\" (UID: \"a3159f91-ca5c-454c-a434-eac725257498\") " pod="openshift-marketplace/redhat-operators-q2wrv" Dec 02 14:41:54 crc kubenswrapper[4814]: I1202 14:41:54.839696 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2wrv" Dec 02 14:41:55 crc kubenswrapper[4814]: I1202 14:41:55.270153 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q2wrv"] Dec 02 14:41:55 crc kubenswrapper[4814]: W1202 14:41:55.282101 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3159f91_ca5c_454c_a434_eac725257498.slice/crio-f924358eca10c9e52fcf4b5a52f7fb516d3aeb461a44747db9872bca0214b343 WatchSource:0}: Error finding container f924358eca10c9e52fcf4b5a52f7fb516d3aeb461a44747db9872bca0214b343: Status 404 returned error can't find the container with id f924358eca10c9e52fcf4b5a52f7fb516d3aeb461a44747db9872bca0214b343 Dec 02 14:41:55 crc kubenswrapper[4814]: I1202 14:41:55.604011 4814 generic.go:334] "Generic (PLEG): container finished" podID="a3159f91-ca5c-454c-a434-eac725257498" containerID="11c8846fc0b8dcf953c175e1cea51be1e67a04b3d855ae6f01520c020152b7ef" exitCode=0 Dec 02 14:41:55 crc kubenswrapper[4814]: I1202 14:41:55.604063 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2wrv" event={"ID":"a3159f91-ca5c-454c-a434-eac725257498","Type":"ContainerDied","Data":"11c8846fc0b8dcf953c175e1cea51be1e67a04b3d855ae6f01520c020152b7ef"} Dec 02 14:41:55 crc kubenswrapper[4814]: I1202 14:41:55.604088 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2wrv" event={"ID":"a3159f91-ca5c-454c-a434-eac725257498","Type":"ContainerStarted","Data":"f924358eca10c9e52fcf4b5a52f7fb516d3aeb461a44747db9872bca0214b343"} Dec 02 14:41:58 crc kubenswrapper[4814]: I1202 14:41:58.625687 4814 generic.go:334] "Generic (PLEG): container finished" podID="ebcb42b8-8b25-4bff-92d5-f572066a7c53" containerID="e264351819e21b7b5dede87acc7a1da7c034473942079ee53e2c2b830af968ea" exitCode=0 Dec 02 14:41:58 crc kubenswrapper[4814]: I1202 14:41:58.625730 
4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9" event={"ID":"ebcb42b8-8b25-4bff-92d5-f572066a7c53","Type":"ContainerDied","Data":"e264351819e21b7b5dede87acc7a1da7c034473942079ee53e2c2b830af968ea"} Dec 02 14:41:58 crc kubenswrapper[4814]: I1202 14:41:58.634795 4814 generic.go:334] "Generic (PLEG): container finished" podID="a3159f91-ca5c-454c-a434-eac725257498" containerID="4522a50211e550f84fc866b5b6a0e4db0bd0224d1846c7d696da4a05cf6fbd6f" exitCode=0 Dec 02 14:41:58 crc kubenswrapper[4814]: I1202 14:41:58.634844 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2wrv" event={"ID":"a3159f91-ca5c-454c-a434-eac725257498","Type":"ContainerDied","Data":"4522a50211e550f84fc866b5b6a0e4db0bd0224d1846c7d696da4a05cf6fbd6f"} Dec 02 14:41:59 crc kubenswrapper[4814]: I1202 14:41:59.135425 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-h5cxr" Dec 02 14:41:59 crc kubenswrapper[4814]: I1202 14:41:59.641987 4814 generic.go:334] "Generic (PLEG): container finished" podID="ebcb42b8-8b25-4bff-92d5-f572066a7c53" containerID="f7973f57ec1a5d8b53a0232016e094cbce15230e1cf3e47312ddcb8f11fff751" exitCode=0 Dec 02 14:41:59 crc kubenswrapper[4814]: I1202 14:41:59.642029 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9" event={"ID":"ebcb42b8-8b25-4bff-92d5-f572066a7c53","Type":"ContainerDied","Data":"f7973f57ec1a5d8b53a0232016e094cbce15230e1cf3e47312ddcb8f11fff751"} Dec 02 14:41:59 crc kubenswrapper[4814]: I1202 14:41:59.644147 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2wrv" event={"ID":"a3159f91-ca5c-454c-a434-eac725257498","Type":"ContainerStarted","Data":"7a85531e0d0e1d7d7c7d6088a9dd688cec33ccf0620159b1dffc7eca40da11ca"} Dec 02 14:41:59 crc 
kubenswrapper[4814]: I1202 14:41:59.673793 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q2wrv" podStartSLOduration=1.970379834 podStartE2EDuration="5.673773669s" podCreationTimestamp="2025-12-02 14:41:54 +0000 UTC" firstStartedPulling="2025-12-02 14:41:55.607336628 +0000 UTC m=+777.731698832" lastFinishedPulling="2025-12-02 14:41:59.310730463 +0000 UTC m=+781.435092667" observedRunningTime="2025-12-02 14:41:59.671823453 +0000 UTC m=+781.796185647" watchObservedRunningTime="2025-12-02 14:41:59.673773669 +0000 UTC m=+781.798135873" Dec 02 14:42:00 crc kubenswrapper[4814]: I1202 14:42:00.942434 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9" Dec 02 14:42:01 crc kubenswrapper[4814]: I1202 14:42:01.086936 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ebcb42b8-8b25-4bff-92d5-f572066a7c53-bundle\") pod \"ebcb42b8-8b25-4bff-92d5-f572066a7c53\" (UID: \"ebcb42b8-8b25-4bff-92d5-f572066a7c53\") " Dec 02 14:42:01 crc kubenswrapper[4814]: I1202 14:42:01.087049 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-845tz\" (UniqueName: \"kubernetes.io/projected/ebcb42b8-8b25-4bff-92d5-f572066a7c53-kube-api-access-845tz\") pod \"ebcb42b8-8b25-4bff-92d5-f572066a7c53\" (UID: \"ebcb42b8-8b25-4bff-92d5-f572066a7c53\") " Dec 02 14:42:01 crc kubenswrapper[4814]: I1202 14:42:01.087099 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ebcb42b8-8b25-4bff-92d5-f572066a7c53-util\") pod \"ebcb42b8-8b25-4bff-92d5-f572066a7c53\" (UID: \"ebcb42b8-8b25-4bff-92d5-f572066a7c53\") " Dec 02 14:42:01 crc kubenswrapper[4814]: I1202 14:42:01.089288 4814 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebcb42b8-8b25-4bff-92d5-f572066a7c53-bundle" (OuterVolumeSpecName: "bundle") pod "ebcb42b8-8b25-4bff-92d5-f572066a7c53" (UID: "ebcb42b8-8b25-4bff-92d5-f572066a7c53"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:42:01 crc kubenswrapper[4814]: I1202 14:42:01.100040 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebcb42b8-8b25-4bff-92d5-f572066a7c53-kube-api-access-845tz" (OuterVolumeSpecName: "kube-api-access-845tz") pod "ebcb42b8-8b25-4bff-92d5-f572066a7c53" (UID: "ebcb42b8-8b25-4bff-92d5-f572066a7c53"). InnerVolumeSpecName "kube-api-access-845tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:42:01 crc kubenswrapper[4814]: I1202 14:42:01.101962 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebcb42b8-8b25-4bff-92d5-f572066a7c53-util" (OuterVolumeSpecName: "util") pod "ebcb42b8-8b25-4bff-92d5-f572066a7c53" (UID: "ebcb42b8-8b25-4bff-92d5-f572066a7c53"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:42:01 crc kubenswrapper[4814]: I1202 14:42:01.188283 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-845tz\" (UniqueName: \"kubernetes.io/projected/ebcb42b8-8b25-4bff-92d5-f572066a7c53-kube-api-access-845tz\") on node \"crc\" DevicePath \"\"" Dec 02 14:42:01 crc kubenswrapper[4814]: I1202 14:42:01.188327 4814 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ebcb42b8-8b25-4bff-92d5-f572066a7c53-util\") on node \"crc\" DevicePath \"\"" Dec 02 14:42:01 crc kubenswrapper[4814]: I1202 14:42:01.188340 4814 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ebcb42b8-8b25-4bff-92d5-f572066a7c53-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:42:01 crc kubenswrapper[4814]: I1202 14:42:01.664655 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9" event={"ID":"ebcb42b8-8b25-4bff-92d5-f572066a7c53","Type":"ContainerDied","Data":"ed83daac585f82bc3b33c6dcdb7f1227c28dbe85dfa6c8fde34d9648f9706e41"} Dec 02 14:42:01 crc kubenswrapper[4814]: I1202 14:42:01.664692 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed83daac585f82bc3b33c6dcdb7f1227c28dbe85dfa6c8fde34d9648f9706e41" Dec 02 14:42:01 crc kubenswrapper[4814]: I1202 14:42:01.664785 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9" Dec 02 14:42:04 crc kubenswrapper[4814]: I1202 14:42:04.840126 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q2wrv" Dec 02 14:42:04 crc kubenswrapper[4814]: I1202 14:42:04.840395 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q2wrv" Dec 02 14:42:05 crc kubenswrapper[4814]: I1202 14:42:05.252884 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-595k7"] Dec 02 14:42:05 crc kubenswrapper[4814]: E1202 14:42:05.253537 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebcb42b8-8b25-4bff-92d5-f572066a7c53" containerName="pull" Dec 02 14:42:05 crc kubenswrapper[4814]: I1202 14:42:05.253561 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebcb42b8-8b25-4bff-92d5-f572066a7c53" containerName="pull" Dec 02 14:42:05 crc kubenswrapper[4814]: E1202 14:42:05.253576 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebcb42b8-8b25-4bff-92d5-f572066a7c53" containerName="extract" Dec 02 14:42:05 crc kubenswrapper[4814]: I1202 14:42:05.253585 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebcb42b8-8b25-4bff-92d5-f572066a7c53" containerName="extract" Dec 02 14:42:05 crc kubenswrapper[4814]: E1202 14:42:05.253604 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebcb42b8-8b25-4bff-92d5-f572066a7c53" containerName="util" Dec 02 14:42:05 crc kubenswrapper[4814]: I1202 14:42:05.253612 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebcb42b8-8b25-4bff-92d5-f572066a7c53" containerName="util" Dec 02 14:42:05 crc kubenswrapper[4814]: I1202 14:42:05.253752 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebcb42b8-8b25-4bff-92d5-f572066a7c53" 
containerName="extract" Dec 02 14:42:05 crc kubenswrapper[4814]: I1202 14:42:05.254310 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-595k7" Dec 02 14:42:05 crc kubenswrapper[4814]: I1202 14:42:05.257976 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Dec 02 14:42:05 crc kubenswrapper[4814]: I1202 14:42:05.258222 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Dec 02 14:42:05 crc kubenswrapper[4814]: I1202 14:42:05.258452 4814 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-qscm2" Dec 02 14:42:05 crc kubenswrapper[4814]: I1202 14:42:05.272853 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-595k7"] Dec 02 14:42:05 crc kubenswrapper[4814]: I1202 14:42:05.344859 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/95e7696d-7ff3-4cf8-a7a5-734045861a8c-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-595k7\" (UID: \"95e7696d-7ff3-4cf8-a7a5-734045861a8c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-595k7" Dec 02 14:42:05 crc kubenswrapper[4814]: I1202 14:42:05.345169 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dklsd\" (UniqueName: \"kubernetes.io/projected/95e7696d-7ff3-4cf8-a7a5-734045861a8c-kube-api-access-dklsd\") pod \"cert-manager-operator-controller-manager-64cf6dff88-595k7\" (UID: \"95e7696d-7ff3-4cf8-a7a5-734045861a8c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-595k7" Dec 02 14:42:05 crc 
kubenswrapper[4814]: I1202 14:42:05.446892 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/95e7696d-7ff3-4cf8-a7a5-734045861a8c-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-595k7\" (UID: \"95e7696d-7ff3-4cf8-a7a5-734045861a8c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-595k7" Dec 02 14:42:05 crc kubenswrapper[4814]: I1202 14:42:05.446994 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dklsd\" (UniqueName: \"kubernetes.io/projected/95e7696d-7ff3-4cf8-a7a5-734045861a8c-kube-api-access-dklsd\") pod \"cert-manager-operator-controller-manager-64cf6dff88-595k7\" (UID: \"95e7696d-7ff3-4cf8-a7a5-734045861a8c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-595k7" Dec 02 14:42:05 crc kubenswrapper[4814]: I1202 14:42:05.447858 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/95e7696d-7ff3-4cf8-a7a5-734045861a8c-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-595k7\" (UID: \"95e7696d-7ff3-4cf8-a7a5-734045861a8c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-595k7" Dec 02 14:42:05 crc kubenswrapper[4814]: I1202 14:42:05.468495 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dklsd\" (UniqueName: \"kubernetes.io/projected/95e7696d-7ff3-4cf8-a7a5-734045861a8c-kube-api-access-dklsd\") pod \"cert-manager-operator-controller-manager-64cf6dff88-595k7\" (UID: \"95e7696d-7ff3-4cf8-a7a5-734045861a8c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-595k7" Dec 02 14:42:05 crc kubenswrapper[4814]: I1202 14:42:05.655439 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-595k7" Dec 02 14:42:05 crc kubenswrapper[4814]: I1202 14:42:05.884941 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q2wrv" podUID="a3159f91-ca5c-454c-a434-eac725257498" containerName="registry-server" probeResult="failure" output=< Dec 02 14:42:05 crc kubenswrapper[4814]: timeout: failed to connect service ":50051" within 1s Dec 02 14:42:05 crc kubenswrapper[4814]: > Dec 02 14:42:06 crc kubenswrapper[4814]: I1202 14:42:06.098749 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-595k7"] Dec 02 14:42:06 crc kubenswrapper[4814]: W1202 14:42:06.109403 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95e7696d_7ff3_4cf8_a7a5_734045861a8c.slice/crio-0d4da051d6be68849c2bdf9a4e115953d7d32ca1bbebc90a3df57639fc28e0d9 WatchSource:0}: Error finding container 0d4da051d6be68849c2bdf9a4e115953d7d32ca1bbebc90a3df57639fc28e0d9: Status 404 returned error can't find the container with id 0d4da051d6be68849c2bdf9a4e115953d7d32ca1bbebc90a3df57639fc28e0d9 Dec 02 14:42:06 crc kubenswrapper[4814]: I1202 14:42:06.714428 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-595k7" event={"ID":"95e7696d-7ff3-4cf8-a7a5-734045861a8c","Type":"ContainerStarted","Data":"0d4da051d6be68849c2bdf9a4e115953d7d32ca1bbebc90a3df57639fc28e0d9"} Dec 02 14:42:11 crc kubenswrapper[4814]: I1202 14:42:11.752969 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-595k7" event={"ID":"95e7696d-7ff3-4cf8-a7a5-734045861a8c","Type":"ContainerStarted","Data":"87b83565846d142f5980780b6b0f819c83cea020338901a9b66ac32bbd193525"} Dec 02 14:42:14 
crc kubenswrapper[4814]: I1202 14:42:14.900846 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q2wrv" Dec 02 14:42:14 crc kubenswrapper[4814]: I1202 14:42:14.949896 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-595k7" podStartSLOduration=5.106582787 podStartE2EDuration="9.949881652s" podCreationTimestamp="2025-12-02 14:42:05 +0000 UTC" firstStartedPulling="2025-12-02 14:42:06.11745108 +0000 UTC m=+788.241813284" lastFinishedPulling="2025-12-02 14:42:10.960749945 +0000 UTC m=+793.085112149" observedRunningTime="2025-12-02 14:42:11.775542784 +0000 UTC m=+793.899904988" watchObservedRunningTime="2025-12-02 14:42:14.949881652 +0000 UTC m=+797.074243856" Dec 02 14:42:14 crc kubenswrapper[4814]: I1202 14:42:14.987268 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q2wrv" Dec 02 14:42:15 crc kubenswrapper[4814]: I1202 14:42:15.022900 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-xsb4g"] Dec 02 14:42:15 crc kubenswrapper[4814]: I1202 14:42:15.023932 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-xsb4g" Dec 02 14:42:15 crc kubenswrapper[4814]: I1202 14:42:15.028443 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 02 14:42:15 crc kubenswrapper[4814]: I1202 14:42:15.028547 4814 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-tzfrk" Dec 02 14:42:15 crc kubenswrapper[4814]: I1202 14:42:15.028461 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 02 14:42:15 crc kubenswrapper[4814]: I1202 14:42:15.108739 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-xsb4g"] Dec 02 14:42:15 crc kubenswrapper[4814]: I1202 14:42:15.188886 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0603f2e1-9323-40a6-abf1-8980f21a707d-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-xsb4g\" (UID: \"0603f2e1-9323-40a6-abf1-8980f21a707d\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-xsb4g" Dec 02 14:42:15 crc kubenswrapper[4814]: I1202 14:42:15.189025 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxzqp\" (UniqueName: \"kubernetes.io/projected/0603f2e1-9323-40a6-abf1-8980f21a707d-kube-api-access-wxzqp\") pod \"cert-manager-webhook-f4fb5df64-xsb4g\" (UID: \"0603f2e1-9323-40a6-abf1-8980f21a707d\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-xsb4g" Dec 02 14:42:15 crc kubenswrapper[4814]: I1202 14:42:15.290428 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxzqp\" (UniqueName: \"kubernetes.io/projected/0603f2e1-9323-40a6-abf1-8980f21a707d-kube-api-access-wxzqp\") pod \"cert-manager-webhook-f4fb5df64-xsb4g\" (UID: 
\"0603f2e1-9323-40a6-abf1-8980f21a707d\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-xsb4g" Dec 02 14:42:15 crc kubenswrapper[4814]: I1202 14:42:15.290496 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0603f2e1-9323-40a6-abf1-8980f21a707d-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-xsb4g\" (UID: \"0603f2e1-9323-40a6-abf1-8980f21a707d\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-xsb4g" Dec 02 14:42:15 crc kubenswrapper[4814]: I1202 14:42:15.308255 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0603f2e1-9323-40a6-abf1-8980f21a707d-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-xsb4g\" (UID: \"0603f2e1-9323-40a6-abf1-8980f21a707d\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-xsb4g" Dec 02 14:42:15 crc kubenswrapper[4814]: I1202 14:42:15.308996 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxzqp\" (UniqueName: \"kubernetes.io/projected/0603f2e1-9323-40a6-abf1-8980f21a707d-kube-api-access-wxzqp\") pod \"cert-manager-webhook-f4fb5df64-xsb4g\" (UID: \"0603f2e1-9323-40a6-abf1-8980f21a707d\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-xsb4g" Dec 02 14:42:15 crc kubenswrapper[4814]: I1202 14:42:15.347817 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-xsb4g" Dec 02 14:42:15 crc kubenswrapper[4814]: I1202 14:42:15.547654 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-xsb4g"] Dec 02 14:42:15 crc kubenswrapper[4814]: W1202 14:42:15.562906 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0603f2e1_9323_40a6_abf1_8980f21a707d.slice/crio-edf548e512aa1f0a8155d7ef02dc8ff9747618824eb40cb4a5723f28f4805053 WatchSource:0}: Error finding container edf548e512aa1f0a8155d7ef02dc8ff9747618824eb40cb4a5723f28f4805053: Status 404 returned error can't find the container with id edf548e512aa1f0a8155d7ef02dc8ff9747618824eb40cb4a5723f28f4805053 Dec 02 14:42:15 crc kubenswrapper[4814]: I1202 14:42:15.777422 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-xsb4g" event={"ID":"0603f2e1-9323-40a6-abf1-8980f21a707d","Type":"ContainerStarted","Data":"edf548e512aa1f0a8155d7ef02dc8ff9747618824eb40cb4a5723f28f4805053"} Dec 02 14:42:17 crc kubenswrapper[4814]: I1202 14:42:17.331153 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q2wrv"] Dec 02 14:42:17 crc kubenswrapper[4814]: I1202 14:42:17.331601 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q2wrv" podUID="a3159f91-ca5c-454c-a434-eac725257498" containerName="registry-server" containerID="cri-o://7a85531e0d0e1d7d7c7d6088a9dd688cec33ccf0620159b1dffc7eca40da11ca" gracePeriod=2 Dec 02 14:42:17 crc kubenswrapper[4814]: I1202 14:42:17.761258 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-hkrtx"] Dec 02 14:42:17 crc kubenswrapper[4814]: I1202 14:42:17.762513 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-hkrtx" Dec 02 14:42:17 crc kubenswrapper[4814]: I1202 14:42:17.765092 4814 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-g75bb" Dec 02 14:42:17 crc kubenswrapper[4814]: I1202 14:42:17.784080 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-hkrtx"] Dec 02 14:42:17 crc kubenswrapper[4814]: I1202 14:42:17.802224 4814 generic.go:334] "Generic (PLEG): container finished" podID="a3159f91-ca5c-454c-a434-eac725257498" containerID="7a85531e0d0e1d7d7c7d6088a9dd688cec33ccf0620159b1dffc7eca40da11ca" exitCode=0 Dec 02 14:42:17 crc kubenswrapper[4814]: I1202 14:42:17.802296 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2wrv" event={"ID":"a3159f91-ca5c-454c-a434-eac725257498","Type":"ContainerDied","Data":"7a85531e0d0e1d7d7c7d6088a9dd688cec33ccf0620159b1dffc7eca40da11ca"} Dec 02 14:42:17 crc kubenswrapper[4814]: I1202 14:42:17.837461 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86l6k\" (UniqueName: \"kubernetes.io/projected/77039ccf-9957-4593-bba5-4c1c89bcf793-kube-api-access-86l6k\") pod \"cert-manager-cainjector-855d9ccff4-hkrtx\" (UID: \"77039ccf-9957-4593-bba5-4c1c89bcf793\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-hkrtx" Dec 02 14:42:17 crc kubenswrapper[4814]: I1202 14:42:17.837627 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/77039ccf-9957-4593-bba5-4c1c89bcf793-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-hkrtx\" (UID: \"77039ccf-9957-4593-bba5-4c1c89bcf793\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-hkrtx" Dec 02 14:42:17 crc kubenswrapper[4814]: I1202 14:42:17.938667 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86l6k\" (UniqueName: \"kubernetes.io/projected/77039ccf-9957-4593-bba5-4c1c89bcf793-kube-api-access-86l6k\") pod \"cert-manager-cainjector-855d9ccff4-hkrtx\" (UID: \"77039ccf-9957-4593-bba5-4c1c89bcf793\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-hkrtx" Dec 02 14:42:17 crc kubenswrapper[4814]: I1202 14:42:17.938743 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/77039ccf-9957-4593-bba5-4c1c89bcf793-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-hkrtx\" (UID: \"77039ccf-9957-4593-bba5-4c1c89bcf793\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-hkrtx" Dec 02 14:42:17 crc kubenswrapper[4814]: I1202 14:42:17.960129 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86l6k\" (UniqueName: \"kubernetes.io/projected/77039ccf-9957-4593-bba5-4c1c89bcf793-kube-api-access-86l6k\") pod \"cert-manager-cainjector-855d9ccff4-hkrtx\" (UID: \"77039ccf-9957-4593-bba5-4c1c89bcf793\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-hkrtx" Dec 02 14:42:17 crc kubenswrapper[4814]: I1202 14:42:17.967953 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/77039ccf-9957-4593-bba5-4c1c89bcf793-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-hkrtx\" (UID: \"77039ccf-9957-4593-bba5-4c1c89bcf793\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-hkrtx" Dec 02 14:42:18 crc kubenswrapper[4814]: I1202 14:42:18.078899 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-hkrtx" Dec 02 14:42:18 crc kubenswrapper[4814]: I1202 14:42:18.299382 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2wrv" Dec 02 14:42:18 crc kubenswrapper[4814]: I1202 14:42:18.343940 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzqbr\" (UniqueName: \"kubernetes.io/projected/a3159f91-ca5c-454c-a434-eac725257498-kube-api-access-lzqbr\") pod \"a3159f91-ca5c-454c-a434-eac725257498\" (UID: \"a3159f91-ca5c-454c-a434-eac725257498\") " Dec 02 14:42:18 crc kubenswrapper[4814]: I1202 14:42:18.344285 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3159f91-ca5c-454c-a434-eac725257498-catalog-content\") pod \"a3159f91-ca5c-454c-a434-eac725257498\" (UID: \"a3159f91-ca5c-454c-a434-eac725257498\") " Dec 02 14:42:18 crc kubenswrapper[4814]: I1202 14:42:18.344378 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3159f91-ca5c-454c-a434-eac725257498-utilities\") pod \"a3159f91-ca5c-454c-a434-eac725257498\" (UID: \"a3159f91-ca5c-454c-a434-eac725257498\") " Dec 02 14:42:18 crc kubenswrapper[4814]: I1202 14:42:18.345411 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3159f91-ca5c-454c-a434-eac725257498-utilities" (OuterVolumeSpecName: "utilities") pod "a3159f91-ca5c-454c-a434-eac725257498" (UID: "a3159f91-ca5c-454c-a434-eac725257498"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:42:18 crc kubenswrapper[4814]: I1202 14:42:18.348878 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3159f91-ca5c-454c-a434-eac725257498-kube-api-access-lzqbr" (OuterVolumeSpecName: "kube-api-access-lzqbr") pod "a3159f91-ca5c-454c-a434-eac725257498" (UID: "a3159f91-ca5c-454c-a434-eac725257498"). InnerVolumeSpecName "kube-api-access-lzqbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:42:18 crc kubenswrapper[4814]: I1202 14:42:18.445563 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3159f91-ca5c-454c-a434-eac725257498-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:42:18 crc kubenswrapper[4814]: I1202 14:42:18.445601 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzqbr\" (UniqueName: \"kubernetes.io/projected/a3159f91-ca5c-454c-a434-eac725257498-kube-api-access-lzqbr\") on node \"crc\" DevicePath \"\"" Dec 02 14:42:18 crc kubenswrapper[4814]: I1202 14:42:18.470018 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3159f91-ca5c-454c-a434-eac725257498-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3159f91-ca5c-454c-a434-eac725257498" (UID: "a3159f91-ca5c-454c-a434-eac725257498"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:42:18 crc kubenswrapper[4814]: I1202 14:42:18.514119 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-hkrtx"] Dec 02 14:42:18 crc kubenswrapper[4814]: I1202 14:42:18.547700 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3159f91-ca5c-454c-a434-eac725257498-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:42:18 crc kubenswrapper[4814]: I1202 14:42:18.814417 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-hkrtx" event={"ID":"77039ccf-9957-4593-bba5-4c1c89bcf793","Type":"ContainerStarted","Data":"008680ae96dc3bd7713e57a8bbacda95d84cd8ce7518918e4e110b27c5b06ff3"} Dec 02 14:42:18 crc kubenswrapper[4814]: I1202 14:42:18.817189 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2wrv" 
event={"ID":"a3159f91-ca5c-454c-a434-eac725257498","Type":"ContainerDied","Data":"f924358eca10c9e52fcf4b5a52f7fb516d3aeb461a44747db9872bca0214b343"} Dec 02 14:42:18 crc kubenswrapper[4814]: I1202 14:42:18.817250 4814 scope.go:117] "RemoveContainer" containerID="7a85531e0d0e1d7d7c7d6088a9dd688cec33ccf0620159b1dffc7eca40da11ca" Dec 02 14:42:18 crc kubenswrapper[4814]: I1202 14:42:18.817302 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q2wrv" Dec 02 14:42:18 crc kubenswrapper[4814]: I1202 14:42:18.838859 4814 scope.go:117] "RemoveContainer" containerID="4522a50211e550f84fc866b5b6a0e4db0bd0224d1846c7d696da4a05cf6fbd6f" Dec 02 14:42:18 crc kubenswrapper[4814]: I1202 14:42:18.870372 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q2wrv"] Dec 02 14:42:18 crc kubenswrapper[4814]: I1202 14:42:18.872318 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q2wrv"] Dec 02 14:42:18 crc kubenswrapper[4814]: I1202 14:42:18.872708 4814 scope.go:117] "RemoveContainer" containerID="11c8846fc0b8dcf953c175e1cea51be1e67a04b3d855ae6f01520c020152b7ef" Dec 02 14:42:20 crc kubenswrapper[4814]: I1202 14:42:20.159971 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3159f91-ca5c-454c-a434-eac725257498" path="/var/lib/kubelet/pods/a3159f91-ca5c-454c-a434-eac725257498/volumes" Dec 02 14:42:20 crc kubenswrapper[4814]: I1202 14:42:20.651082 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:42:20 crc kubenswrapper[4814]: I1202 14:42:20.651196 4814 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:42:20 crc kubenswrapper[4814]: I1202 14:42:20.651295 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 14:42:20 crc kubenswrapper[4814]: I1202 14:42:20.652394 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a1ed9f1bb911b9ac82f3b2b6f82d455cb053025789269a61268761d5183751b3"} pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:42:20 crc kubenswrapper[4814]: I1202 14:42:20.652475 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" containerID="cri-o://a1ed9f1bb911b9ac82f3b2b6f82d455cb053025789269a61268761d5183751b3" gracePeriod=600 Dec 02 14:42:21 crc kubenswrapper[4814]: I1202 14:42:21.843178 4814 generic.go:334] "Generic (PLEG): container finished" podID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerID="a1ed9f1bb911b9ac82f3b2b6f82d455cb053025789269a61268761d5183751b3" exitCode=0 Dec 02 14:42:21 crc kubenswrapper[4814]: I1202 14:42:21.843268 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerDied","Data":"a1ed9f1bb911b9ac82f3b2b6f82d455cb053025789269a61268761d5183751b3"} Dec 02 14:42:21 crc kubenswrapper[4814]: I1202 14:42:21.843475 4814 scope.go:117] "RemoveContainer" 
containerID="2845690d7fa975a6fc9bee687fba5b5a81fdaa565ae0b556f8b2ea0b2d8f3b40" Dec 02 14:42:25 crc kubenswrapper[4814]: I1202 14:42:25.889858 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-xsb4g" event={"ID":"0603f2e1-9323-40a6-abf1-8980f21a707d","Type":"ContainerStarted","Data":"5194110e975654a3243481c5944553c91156865c28e48968112fe42d357a2881"} Dec 02 14:42:25 crc kubenswrapper[4814]: I1202 14:42:25.890406 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-xsb4g" Dec 02 14:42:25 crc kubenswrapper[4814]: I1202 14:42:25.895267 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerStarted","Data":"997855c695d121643798ebc9fbdd2b1a6b7bdb08460863ea230c98ff92c4714f"} Dec 02 14:42:25 crc kubenswrapper[4814]: I1202 14:42:25.896444 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-hkrtx" event={"ID":"77039ccf-9957-4593-bba5-4c1c89bcf793","Type":"ContainerStarted","Data":"2f78d0706817faa7963960e1ba95ea69082d7f586850e1849f3c0ce7cb21bf97"} Dec 02 14:42:25 crc kubenswrapper[4814]: I1202 14:42:25.914573 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-xsb4g" podStartSLOduration=2.560786476 podStartE2EDuration="11.914550106s" podCreationTimestamp="2025-12-02 14:42:14 +0000 UTC" firstStartedPulling="2025-12-02 14:42:15.565885654 +0000 UTC m=+797.690247858" lastFinishedPulling="2025-12-02 14:42:24.919649284 +0000 UTC m=+807.044011488" observedRunningTime="2025-12-02 14:42:25.906011265 +0000 UTC m=+808.030373499" watchObservedRunningTime="2025-12-02 14:42:25.914550106 +0000 UTC m=+808.038912320" Dec 02 14:42:25 crc kubenswrapper[4814]: I1202 14:42:25.931584 4814 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-hkrtx" podStartSLOduration=2.563843949 podStartE2EDuration="8.931561498s" podCreationTimestamp="2025-12-02 14:42:17 +0000 UTC" firstStartedPulling="2025-12-02 14:42:18.535593999 +0000 UTC m=+800.659956203" lastFinishedPulling="2025-12-02 14:42:24.903311538 +0000 UTC m=+807.027673752" observedRunningTime="2025-12-02 14:42:25.927649416 +0000 UTC m=+808.052011640" watchObservedRunningTime="2025-12-02 14:42:25.931561498 +0000 UTC m=+808.055923722" Dec 02 14:42:30 crc kubenswrapper[4814]: I1202 14:42:30.350695 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-xsb4g" Dec 02 14:42:34 crc kubenswrapper[4814]: I1202 14:42:34.210659 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-gss2c"] Dec 02 14:42:34 crc kubenswrapper[4814]: E1202 14:42:34.211571 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3159f91-ca5c-454c-a434-eac725257498" containerName="extract-utilities" Dec 02 14:42:34 crc kubenswrapper[4814]: I1202 14:42:34.211594 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3159f91-ca5c-454c-a434-eac725257498" containerName="extract-utilities" Dec 02 14:42:34 crc kubenswrapper[4814]: E1202 14:42:34.211626 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3159f91-ca5c-454c-a434-eac725257498" containerName="extract-content" Dec 02 14:42:34 crc kubenswrapper[4814]: I1202 14:42:34.211641 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3159f91-ca5c-454c-a434-eac725257498" containerName="extract-content" Dec 02 14:42:34 crc kubenswrapper[4814]: E1202 14:42:34.211658 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3159f91-ca5c-454c-a434-eac725257498" containerName="registry-server" Dec 02 14:42:34 crc kubenswrapper[4814]: I1202 14:42:34.211671 4814 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a3159f91-ca5c-454c-a434-eac725257498" containerName="registry-server" Dec 02 14:42:34 crc kubenswrapper[4814]: I1202 14:42:34.211872 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3159f91-ca5c-454c-a434-eac725257498" containerName="registry-server" Dec 02 14:42:34 crc kubenswrapper[4814]: I1202 14:42:34.212568 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-gss2c" Dec 02 14:42:34 crc kubenswrapper[4814]: I1202 14:42:34.215226 4814 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-l8jpm" Dec 02 14:42:34 crc kubenswrapper[4814]: I1202 14:42:34.224487 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-gss2c"] Dec 02 14:42:34 crc kubenswrapper[4814]: I1202 14:42:34.392116 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsdmg\" (UniqueName: \"kubernetes.io/projected/7953c8b5-72f8-4277-b481-eb5d59a88de4-kube-api-access-zsdmg\") pod \"cert-manager-86cb77c54b-gss2c\" (UID: \"7953c8b5-72f8-4277-b481-eb5d59a88de4\") " pod="cert-manager/cert-manager-86cb77c54b-gss2c" Dec 02 14:42:34 crc kubenswrapper[4814]: I1202 14:42:34.392216 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7953c8b5-72f8-4277-b481-eb5d59a88de4-bound-sa-token\") pod \"cert-manager-86cb77c54b-gss2c\" (UID: \"7953c8b5-72f8-4277-b481-eb5d59a88de4\") " pod="cert-manager/cert-manager-86cb77c54b-gss2c" Dec 02 14:42:34 crc kubenswrapper[4814]: I1202 14:42:34.493422 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsdmg\" (UniqueName: \"kubernetes.io/projected/7953c8b5-72f8-4277-b481-eb5d59a88de4-kube-api-access-zsdmg\") pod \"cert-manager-86cb77c54b-gss2c\" (UID: 
\"7953c8b5-72f8-4277-b481-eb5d59a88de4\") " pod="cert-manager/cert-manager-86cb77c54b-gss2c" Dec 02 14:42:34 crc kubenswrapper[4814]: I1202 14:42:34.493525 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7953c8b5-72f8-4277-b481-eb5d59a88de4-bound-sa-token\") pod \"cert-manager-86cb77c54b-gss2c\" (UID: \"7953c8b5-72f8-4277-b481-eb5d59a88de4\") " pod="cert-manager/cert-manager-86cb77c54b-gss2c" Dec 02 14:42:34 crc kubenswrapper[4814]: I1202 14:42:34.511305 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7953c8b5-72f8-4277-b481-eb5d59a88de4-bound-sa-token\") pod \"cert-manager-86cb77c54b-gss2c\" (UID: \"7953c8b5-72f8-4277-b481-eb5d59a88de4\") " pod="cert-manager/cert-manager-86cb77c54b-gss2c" Dec 02 14:42:34 crc kubenswrapper[4814]: I1202 14:42:34.512064 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsdmg\" (UniqueName: \"kubernetes.io/projected/7953c8b5-72f8-4277-b481-eb5d59a88de4-kube-api-access-zsdmg\") pod \"cert-manager-86cb77c54b-gss2c\" (UID: \"7953c8b5-72f8-4277-b481-eb5d59a88de4\") " pod="cert-manager/cert-manager-86cb77c54b-gss2c" Dec 02 14:42:34 crc kubenswrapper[4814]: I1202 14:42:34.535352 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-gss2c" Dec 02 14:42:34 crc kubenswrapper[4814]: I1202 14:42:34.968534 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-gss2c"] Dec 02 14:42:35 crc kubenswrapper[4814]: I1202 14:42:35.982562 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-gss2c" event={"ID":"7953c8b5-72f8-4277-b481-eb5d59a88de4","Type":"ContainerStarted","Data":"d1e4f28b3d6bacbb15235733f8fdef56a8f3792fb5fd9b529c9e751851513f1b"} Dec 02 14:42:35 crc kubenswrapper[4814]: I1202 14:42:35.982876 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-gss2c" event={"ID":"7953c8b5-72f8-4277-b481-eb5d59a88de4","Type":"ContainerStarted","Data":"c123166c75032183b50cb99a972b8c791c60c74c2bd5908bec352cce2625cf14"} Dec 02 14:42:36 crc kubenswrapper[4814]: I1202 14:42:36.007129 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-gss2c" podStartSLOduration=2.007099907 podStartE2EDuration="2.007099907s" podCreationTimestamp="2025-12-02 14:42:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:42:36.002553389 +0000 UTC m=+818.126915623" watchObservedRunningTime="2025-12-02 14:42:36.007099907 +0000 UTC m=+818.131462151" Dec 02 14:42:44 crc kubenswrapper[4814]: I1202 14:42:44.095363 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-k2f5m"] Dec 02 14:42:44 crc kubenswrapper[4814]: I1202 14:42:44.104147 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-k2f5m" Dec 02 14:42:44 crc kubenswrapper[4814]: I1202 14:42:44.110538 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 02 14:42:44 crc kubenswrapper[4814]: I1202 14:42:44.110617 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 02 14:42:44 crc kubenswrapper[4814]: I1202 14:42:44.127905 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-k2f5m"] Dec 02 14:42:44 crc kubenswrapper[4814]: I1202 14:42:44.270832 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th7lq\" (UniqueName: \"kubernetes.io/projected/5f529308-d964-4fae-a3d4-b5d19551bca2-kube-api-access-th7lq\") pod \"openstack-operator-index-k2f5m\" (UID: \"5f529308-d964-4fae-a3d4-b5d19551bca2\") " pod="openstack-operators/openstack-operator-index-k2f5m" Dec 02 14:42:44 crc kubenswrapper[4814]: I1202 14:42:44.372225 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th7lq\" (UniqueName: \"kubernetes.io/projected/5f529308-d964-4fae-a3d4-b5d19551bca2-kube-api-access-th7lq\") pod \"openstack-operator-index-k2f5m\" (UID: \"5f529308-d964-4fae-a3d4-b5d19551bca2\") " pod="openstack-operators/openstack-operator-index-k2f5m" Dec 02 14:42:44 crc kubenswrapper[4814]: I1202 14:42:44.389991 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th7lq\" (UniqueName: \"kubernetes.io/projected/5f529308-d964-4fae-a3d4-b5d19551bca2-kube-api-access-th7lq\") pod \"openstack-operator-index-k2f5m\" (UID: \"5f529308-d964-4fae-a3d4-b5d19551bca2\") " pod="openstack-operators/openstack-operator-index-k2f5m" Dec 02 14:42:44 crc kubenswrapper[4814]: I1202 14:42:44.431109 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-k2f5m" Dec 02 14:42:44 crc kubenswrapper[4814]: I1202 14:42:44.884592 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-k2f5m"] Dec 02 14:42:44 crc kubenswrapper[4814]: W1202 14:42:44.898258 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f529308_d964_4fae_a3d4_b5d19551bca2.slice/crio-9c684deb7dfe60c3d7c838d19390de0b85979fe3afc1bd8f36cc13c451306c09 WatchSource:0}: Error finding container 9c684deb7dfe60c3d7c838d19390de0b85979fe3afc1bd8f36cc13c451306c09: Status 404 returned error can't find the container with id 9c684deb7dfe60c3d7c838d19390de0b85979fe3afc1bd8f36cc13c451306c09 Dec 02 14:42:45 crc kubenswrapper[4814]: I1202 14:42:45.054841 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k2f5m" event={"ID":"5f529308-d964-4fae-a3d4-b5d19551bca2","Type":"ContainerStarted","Data":"9c684deb7dfe60c3d7c838d19390de0b85979fe3afc1bd8f36cc13c451306c09"} Dec 02 14:42:47 crc kubenswrapper[4814]: I1202 14:42:47.468286 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-k2f5m"] Dec 02 14:42:48 crc kubenswrapper[4814]: I1202 14:42:48.078649 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vbb99"] Dec 02 14:42:48 crc kubenswrapper[4814]: I1202 14:42:48.079746 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vbb99" Dec 02 14:42:48 crc kubenswrapper[4814]: I1202 14:42:48.085581 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-k4m2k" Dec 02 14:42:48 crc kubenswrapper[4814]: I1202 14:42:48.094490 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vbb99"] Dec 02 14:42:48 crc kubenswrapper[4814]: I1202 14:42:48.239118 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9j5v\" (UniqueName: \"kubernetes.io/projected/6ff033a3-1047-4ee3-904b-1e6845244e16-kube-api-access-z9j5v\") pod \"openstack-operator-index-vbb99\" (UID: \"6ff033a3-1047-4ee3-904b-1e6845244e16\") " pod="openstack-operators/openstack-operator-index-vbb99" Dec 02 14:42:48 crc kubenswrapper[4814]: I1202 14:42:48.344155 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9j5v\" (UniqueName: \"kubernetes.io/projected/6ff033a3-1047-4ee3-904b-1e6845244e16-kube-api-access-z9j5v\") pod \"openstack-operator-index-vbb99\" (UID: \"6ff033a3-1047-4ee3-904b-1e6845244e16\") " pod="openstack-operators/openstack-operator-index-vbb99" Dec 02 14:42:48 crc kubenswrapper[4814]: I1202 14:42:48.370848 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9j5v\" (UniqueName: \"kubernetes.io/projected/6ff033a3-1047-4ee3-904b-1e6845244e16-kube-api-access-z9j5v\") pod \"openstack-operator-index-vbb99\" (UID: \"6ff033a3-1047-4ee3-904b-1e6845244e16\") " pod="openstack-operators/openstack-operator-index-vbb99" Dec 02 14:42:48 crc kubenswrapper[4814]: I1202 14:42:48.403439 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vbb99" Dec 02 14:42:49 crc kubenswrapper[4814]: I1202 14:42:49.027123 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vbb99"] Dec 02 14:42:50 crc kubenswrapper[4814]: I1202 14:42:50.093788 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vbb99" event={"ID":"6ff033a3-1047-4ee3-904b-1e6845244e16","Type":"ContainerStarted","Data":"3c49c259d9777f7d1caf6385f73998766a9262b1a780bf2dba38c2444ee828d8"} Dec 02 14:42:51 crc kubenswrapper[4814]: I1202 14:42:51.105610 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k2f5m" event={"ID":"5f529308-d964-4fae-a3d4-b5d19551bca2","Type":"ContainerStarted","Data":"b570b0439101481b529ac5e60bc898ab84426ea4d6f3a1a22fff9aef1f0406bd"} Dec 02 14:42:51 crc kubenswrapper[4814]: I1202 14:42:51.105851 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-k2f5m" podUID="5f529308-d964-4fae-a3d4-b5d19551bca2" containerName="registry-server" containerID="cri-o://b570b0439101481b529ac5e60bc898ab84426ea4d6f3a1a22fff9aef1f0406bd" gracePeriod=2 Dec 02 14:42:51 crc kubenswrapper[4814]: I1202 14:42:51.110342 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vbb99" event={"ID":"6ff033a3-1047-4ee3-904b-1e6845244e16","Type":"ContainerStarted","Data":"6da86fc62837a8e89bbeafe6b91587dbd09d116cde35a214ea355221a0e26fc6"} Dec 02 14:42:51 crc kubenswrapper[4814]: I1202 14:42:51.130212 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-k2f5m" podStartSLOduration=2.065808807 podStartE2EDuration="7.130188986s" podCreationTimestamp="2025-12-02 14:42:44 +0000 UTC" firstStartedPulling="2025-12-02 14:42:44.901282728 +0000 UTC 
m=+827.025644932" lastFinishedPulling="2025-12-02 14:42:49.965662867 +0000 UTC m=+832.090025111" observedRunningTime="2025-12-02 14:42:51.124182134 +0000 UTC m=+833.248544348" watchObservedRunningTime="2025-12-02 14:42:51.130188986 +0000 UTC m=+833.254551200" Dec 02 14:42:51 crc kubenswrapper[4814]: I1202 14:42:51.145626 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vbb99" podStartSLOduration=2.390469661 podStartE2EDuration="3.1456104s" podCreationTimestamp="2025-12-02 14:42:48 +0000 UTC" firstStartedPulling="2025-12-02 14:42:49.207767092 +0000 UTC m=+831.332129306" lastFinishedPulling="2025-12-02 14:42:49.962907841 +0000 UTC m=+832.087270045" observedRunningTime="2025-12-02 14:42:51.14346708 +0000 UTC m=+833.267829314" watchObservedRunningTime="2025-12-02 14:42:51.1456104 +0000 UTC m=+833.269972604" Dec 02 14:42:52 crc kubenswrapper[4814]: I1202 14:42:52.054041 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-k2f5m" Dec 02 14:42:52 crc kubenswrapper[4814]: I1202 14:42:52.118497 4814 generic.go:334] "Generic (PLEG): container finished" podID="5f529308-d964-4fae-a3d4-b5d19551bca2" containerID="b570b0439101481b529ac5e60bc898ab84426ea4d6f3a1a22fff9aef1f0406bd" exitCode=0 Dec 02 14:42:52 crc kubenswrapper[4814]: I1202 14:42:52.118571 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-k2f5m" Dec 02 14:42:52 crc kubenswrapper[4814]: I1202 14:42:52.118620 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k2f5m" event={"ID":"5f529308-d964-4fae-a3d4-b5d19551bca2","Type":"ContainerDied","Data":"b570b0439101481b529ac5e60bc898ab84426ea4d6f3a1a22fff9aef1f0406bd"} Dec 02 14:42:52 crc kubenswrapper[4814]: I1202 14:42:52.118662 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k2f5m" event={"ID":"5f529308-d964-4fae-a3d4-b5d19551bca2","Type":"ContainerDied","Data":"9c684deb7dfe60c3d7c838d19390de0b85979fe3afc1bd8f36cc13c451306c09"} Dec 02 14:42:52 crc kubenswrapper[4814]: I1202 14:42:52.118677 4814 scope.go:117] "RemoveContainer" containerID="b570b0439101481b529ac5e60bc898ab84426ea4d6f3a1a22fff9aef1f0406bd" Dec 02 14:42:52 crc kubenswrapper[4814]: I1202 14:42:52.136499 4814 scope.go:117] "RemoveContainer" containerID="b570b0439101481b529ac5e60bc898ab84426ea4d6f3a1a22fff9aef1f0406bd" Dec 02 14:42:52 crc kubenswrapper[4814]: E1202 14:42:52.136867 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b570b0439101481b529ac5e60bc898ab84426ea4d6f3a1a22fff9aef1f0406bd\": container with ID starting with b570b0439101481b529ac5e60bc898ab84426ea4d6f3a1a22fff9aef1f0406bd not found: ID does not exist" containerID="b570b0439101481b529ac5e60bc898ab84426ea4d6f3a1a22fff9aef1f0406bd" Dec 02 14:42:52 crc kubenswrapper[4814]: I1202 14:42:52.136915 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b570b0439101481b529ac5e60bc898ab84426ea4d6f3a1a22fff9aef1f0406bd"} err="failed to get container status \"b570b0439101481b529ac5e60bc898ab84426ea4d6f3a1a22fff9aef1f0406bd\": rpc error: code = NotFound desc = could not find container 
\"b570b0439101481b529ac5e60bc898ab84426ea4d6f3a1a22fff9aef1f0406bd\": container with ID starting with b570b0439101481b529ac5e60bc898ab84426ea4d6f3a1a22fff9aef1f0406bd not found: ID does not exist" Dec 02 14:42:52 crc kubenswrapper[4814]: I1202 14:42:52.218950 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th7lq\" (UniqueName: \"kubernetes.io/projected/5f529308-d964-4fae-a3d4-b5d19551bca2-kube-api-access-th7lq\") pod \"5f529308-d964-4fae-a3d4-b5d19551bca2\" (UID: \"5f529308-d964-4fae-a3d4-b5d19551bca2\") " Dec 02 14:42:52 crc kubenswrapper[4814]: I1202 14:42:52.230137 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f529308-d964-4fae-a3d4-b5d19551bca2-kube-api-access-th7lq" (OuterVolumeSpecName: "kube-api-access-th7lq") pod "5f529308-d964-4fae-a3d4-b5d19551bca2" (UID: "5f529308-d964-4fae-a3d4-b5d19551bca2"). InnerVolumeSpecName "kube-api-access-th7lq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:42:52 crc kubenswrapper[4814]: I1202 14:42:52.321099 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th7lq\" (UniqueName: \"kubernetes.io/projected/5f529308-d964-4fae-a3d4-b5d19551bca2-kube-api-access-th7lq\") on node \"crc\" DevicePath \"\"" Dec 02 14:42:52 crc kubenswrapper[4814]: I1202 14:42:52.475597 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-k2f5m"] Dec 02 14:42:52 crc kubenswrapper[4814]: I1202 14:42:52.480814 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-k2f5m"] Dec 02 14:42:54 crc kubenswrapper[4814]: I1202 14:42:54.153103 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f529308-d964-4fae-a3d4-b5d19551bca2" path="/var/lib/kubelet/pods/5f529308-d964-4fae-a3d4-b5d19551bca2/volumes" Dec 02 14:42:58 crc kubenswrapper[4814]: I1202 14:42:58.404490 4814 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-vbb99" Dec 02 14:42:58 crc kubenswrapper[4814]: I1202 14:42:58.406019 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-vbb99" Dec 02 14:42:58 crc kubenswrapper[4814]: I1202 14:42:58.455017 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-vbb99" Dec 02 14:42:59 crc kubenswrapper[4814]: I1202 14:42:59.199469 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-vbb99" Dec 02 14:43:06 crc kubenswrapper[4814]: I1202 14:43:06.369743 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c"] Dec 02 14:43:06 crc kubenswrapper[4814]: E1202 14:43:06.370485 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f529308-d964-4fae-a3d4-b5d19551bca2" containerName="registry-server" Dec 02 14:43:06 crc kubenswrapper[4814]: I1202 14:43:06.370508 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f529308-d964-4fae-a3d4-b5d19551bca2" containerName="registry-server" Dec 02 14:43:06 crc kubenswrapper[4814]: I1202 14:43:06.370708 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f529308-d964-4fae-a3d4-b5d19551bca2" containerName="registry-server" Dec 02 14:43:06 crc kubenswrapper[4814]: I1202 14:43:06.372146 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c" Dec 02 14:43:06 crc kubenswrapper[4814]: I1202 14:43:06.375537 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-8zw7q" Dec 02 14:43:06 crc kubenswrapper[4814]: I1202 14:43:06.383303 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c"] Dec 02 14:43:06 crc kubenswrapper[4814]: I1202 14:43:06.539603 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78b31a07-98de-43ef-994f-f7781fac719f-util\") pod \"97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c\" (UID: \"78b31a07-98de-43ef-994f-f7781fac719f\") " pod="openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c" Dec 02 14:43:06 crc kubenswrapper[4814]: I1202 14:43:06.539867 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78b31a07-98de-43ef-994f-f7781fac719f-bundle\") pod \"97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c\" (UID: \"78b31a07-98de-43ef-994f-f7781fac719f\") " pod="openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c" Dec 02 14:43:06 crc kubenswrapper[4814]: I1202 14:43:06.539941 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flgb2\" (UniqueName: \"kubernetes.io/projected/78b31a07-98de-43ef-994f-f7781fac719f-kube-api-access-flgb2\") pod \"97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c\" (UID: \"78b31a07-98de-43ef-994f-f7781fac719f\") " pod="openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c" Dec 02 14:43:06 crc kubenswrapper[4814]: I1202 
14:43:06.641489 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78b31a07-98de-43ef-994f-f7781fac719f-bundle\") pod \"97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c\" (UID: \"78b31a07-98de-43ef-994f-f7781fac719f\") " pod="openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c" Dec 02 14:43:06 crc kubenswrapper[4814]: I1202 14:43:06.641797 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flgb2\" (UniqueName: \"kubernetes.io/projected/78b31a07-98de-43ef-994f-f7781fac719f-kube-api-access-flgb2\") pod \"97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c\" (UID: \"78b31a07-98de-43ef-994f-f7781fac719f\") " pod="openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c" Dec 02 14:43:06 crc kubenswrapper[4814]: I1202 14:43:06.641898 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78b31a07-98de-43ef-994f-f7781fac719f-util\") pod \"97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c\" (UID: \"78b31a07-98de-43ef-994f-f7781fac719f\") " pod="openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c" Dec 02 14:43:06 crc kubenswrapper[4814]: I1202 14:43:06.642215 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78b31a07-98de-43ef-994f-f7781fac719f-bundle\") pod \"97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c\" (UID: \"78b31a07-98de-43ef-994f-f7781fac719f\") " pod="openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c" Dec 02 14:43:06 crc kubenswrapper[4814]: I1202 14:43:06.642510 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/78b31a07-98de-43ef-994f-f7781fac719f-util\") pod \"97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c\" (UID: \"78b31a07-98de-43ef-994f-f7781fac719f\") " pod="openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c" Dec 02 14:43:06 crc kubenswrapper[4814]: I1202 14:43:06.668509 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flgb2\" (UniqueName: \"kubernetes.io/projected/78b31a07-98de-43ef-994f-f7781fac719f-kube-api-access-flgb2\") pod \"97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c\" (UID: \"78b31a07-98de-43ef-994f-f7781fac719f\") " pod="openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c" Dec 02 14:43:06 crc kubenswrapper[4814]: I1202 14:43:06.699969 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c" Dec 02 14:43:07 crc kubenswrapper[4814]: I1202 14:43:07.163705 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c"] Dec 02 14:43:07 crc kubenswrapper[4814]: I1202 14:43:07.245273 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c" event={"ID":"78b31a07-98de-43ef-994f-f7781fac719f","Type":"ContainerStarted","Data":"7f9a0be954c3facb753581392b39830bf9b8bad119227998a6bba79f3ecae9e3"} Dec 02 14:43:08 crc kubenswrapper[4814]: I1202 14:43:08.252034 4814 generic.go:334] "Generic (PLEG): container finished" podID="78b31a07-98de-43ef-994f-f7781fac719f" containerID="4ec727b57826b70acfeb34ec0ab07abc2650f540616e698ceef9a5747bc3098e" exitCode=0 Dec 02 14:43:08 crc kubenswrapper[4814]: I1202 14:43:08.252358 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c" event={"ID":"78b31a07-98de-43ef-994f-f7781fac719f","Type":"ContainerDied","Data":"4ec727b57826b70acfeb34ec0ab07abc2650f540616e698ceef9a5747bc3098e"} Dec 02 14:43:09 crc kubenswrapper[4814]: I1202 14:43:09.258865 4814 generic.go:334] "Generic (PLEG): container finished" podID="78b31a07-98de-43ef-994f-f7781fac719f" containerID="39cfbe3440f1ecc65de16765f6c0f527b1b9d68b2e2e1256d65696c98a567bd9" exitCode=0 Dec 02 14:43:09 crc kubenswrapper[4814]: I1202 14:43:09.258914 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c" event={"ID":"78b31a07-98de-43ef-994f-f7781fac719f","Type":"ContainerDied","Data":"39cfbe3440f1ecc65de16765f6c0f527b1b9d68b2e2e1256d65696c98a567bd9"} Dec 02 14:43:10 crc kubenswrapper[4814]: I1202 14:43:10.268167 4814 generic.go:334] "Generic (PLEG): container finished" podID="78b31a07-98de-43ef-994f-f7781fac719f" containerID="104005bdc2da81df1f72bea0760bb492f100a3c72e30f6aadbedef8db5fd8c4d" exitCode=0 Dec 02 14:43:10 crc kubenswrapper[4814]: I1202 14:43:10.268293 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c" event={"ID":"78b31a07-98de-43ef-994f-f7781fac719f","Type":"ContainerDied","Data":"104005bdc2da81df1f72bea0760bb492f100a3c72e30f6aadbedef8db5fd8c4d"} Dec 02 14:43:11 crc kubenswrapper[4814]: I1202 14:43:11.525010 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c" Dec 02 14:43:11 crc kubenswrapper[4814]: I1202 14:43:11.717806 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78b31a07-98de-43ef-994f-f7781fac719f-bundle\") pod \"78b31a07-98de-43ef-994f-f7781fac719f\" (UID: \"78b31a07-98de-43ef-994f-f7781fac719f\") " Dec 02 14:43:11 crc kubenswrapper[4814]: I1202 14:43:11.717871 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78b31a07-98de-43ef-994f-f7781fac719f-util\") pod \"78b31a07-98de-43ef-994f-f7781fac719f\" (UID: \"78b31a07-98de-43ef-994f-f7781fac719f\") " Dec 02 14:43:11 crc kubenswrapper[4814]: I1202 14:43:11.717979 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flgb2\" (UniqueName: \"kubernetes.io/projected/78b31a07-98de-43ef-994f-f7781fac719f-kube-api-access-flgb2\") pod \"78b31a07-98de-43ef-994f-f7781fac719f\" (UID: \"78b31a07-98de-43ef-994f-f7781fac719f\") " Dec 02 14:43:11 crc kubenswrapper[4814]: I1202 14:43:11.718695 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78b31a07-98de-43ef-994f-f7781fac719f-bundle" (OuterVolumeSpecName: "bundle") pod "78b31a07-98de-43ef-994f-f7781fac719f" (UID: "78b31a07-98de-43ef-994f-f7781fac719f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:43:11 crc kubenswrapper[4814]: I1202 14:43:11.724452 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78b31a07-98de-43ef-994f-f7781fac719f-kube-api-access-flgb2" (OuterVolumeSpecName: "kube-api-access-flgb2") pod "78b31a07-98de-43ef-994f-f7781fac719f" (UID: "78b31a07-98de-43ef-994f-f7781fac719f"). InnerVolumeSpecName "kube-api-access-flgb2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:43:11 crc kubenswrapper[4814]: I1202 14:43:11.741893 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78b31a07-98de-43ef-994f-f7781fac719f-util" (OuterVolumeSpecName: "util") pod "78b31a07-98de-43ef-994f-f7781fac719f" (UID: "78b31a07-98de-43ef-994f-f7781fac719f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:43:11 crc kubenswrapper[4814]: I1202 14:43:11.819571 4814 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78b31a07-98de-43ef-994f-f7781fac719f-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:43:11 crc kubenswrapper[4814]: I1202 14:43:11.819611 4814 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78b31a07-98de-43ef-994f-f7781fac719f-util\") on node \"crc\" DevicePath \"\"" Dec 02 14:43:11 crc kubenswrapper[4814]: I1202 14:43:11.819627 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flgb2\" (UniqueName: \"kubernetes.io/projected/78b31a07-98de-43ef-994f-f7781fac719f-kube-api-access-flgb2\") on node \"crc\" DevicePath \"\"" Dec 02 14:43:12 crc kubenswrapper[4814]: I1202 14:43:12.286815 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c" event={"ID":"78b31a07-98de-43ef-994f-f7781fac719f","Type":"ContainerDied","Data":"7f9a0be954c3facb753581392b39830bf9b8bad119227998a6bba79f3ecae9e3"} Dec 02 14:43:12 crc kubenswrapper[4814]: I1202 14:43:12.287142 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f9a0be954c3facb753581392b39830bf9b8bad119227998a6bba79f3ecae9e3" Dec 02 14:43:12 crc kubenswrapper[4814]: I1202 14:43:12.286935 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c" Dec 02 14:43:18 crc kubenswrapper[4814]: I1202 14:43:18.530887 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv"] Dec 02 14:43:18 crc kubenswrapper[4814]: E1202 14:43:18.535837 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b31a07-98de-43ef-994f-f7781fac719f" containerName="pull" Dec 02 14:43:18 crc kubenswrapper[4814]: I1202 14:43:18.535888 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b31a07-98de-43ef-994f-f7781fac719f" containerName="pull" Dec 02 14:43:18 crc kubenswrapper[4814]: E1202 14:43:18.535924 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b31a07-98de-43ef-994f-f7781fac719f" containerName="extract" Dec 02 14:43:18 crc kubenswrapper[4814]: I1202 14:43:18.535931 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b31a07-98de-43ef-994f-f7781fac719f" containerName="extract" Dec 02 14:43:18 crc kubenswrapper[4814]: E1202 14:43:18.535970 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b31a07-98de-43ef-994f-f7781fac719f" containerName="util" Dec 02 14:43:18 crc kubenswrapper[4814]: I1202 14:43:18.535975 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b31a07-98de-43ef-994f-f7781fac719f" containerName="util" Dec 02 14:43:18 crc kubenswrapper[4814]: I1202 14:43:18.536364 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="78b31a07-98de-43ef-994f-f7781fac719f" containerName="extract" Dec 02 14:43:18 crc kubenswrapper[4814]: I1202 14:43:18.537086 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv" Dec 02 14:43:18 crc kubenswrapper[4814]: I1202 14:43:18.540805 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-fnjc8" Dec 02 14:43:18 crc kubenswrapper[4814]: I1202 14:43:18.604375 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv"] Dec 02 14:43:18 crc kubenswrapper[4814]: I1202 14:43:18.727934 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh9rz\" (UniqueName: \"kubernetes.io/projected/5e592dcb-a77d-4bf2-8600-7655ad2b6e81-kube-api-access-kh9rz\") pod \"openstack-operator-controller-operator-748bc85fdb-87vnv\" (UID: \"5e592dcb-a77d-4bf2-8600-7655ad2b6e81\") " pod="openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv" Dec 02 14:43:18 crc kubenswrapper[4814]: I1202 14:43:18.829071 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh9rz\" (UniqueName: \"kubernetes.io/projected/5e592dcb-a77d-4bf2-8600-7655ad2b6e81-kube-api-access-kh9rz\") pod \"openstack-operator-controller-operator-748bc85fdb-87vnv\" (UID: \"5e592dcb-a77d-4bf2-8600-7655ad2b6e81\") " pod="openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv" Dec 02 14:43:18 crc kubenswrapper[4814]: I1202 14:43:18.864181 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh9rz\" (UniqueName: \"kubernetes.io/projected/5e592dcb-a77d-4bf2-8600-7655ad2b6e81-kube-api-access-kh9rz\") pod \"openstack-operator-controller-operator-748bc85fdb-87vnv\" (UID: \"5e592dcb-a77d-4bf2-8600-7655ad2b6e81\") " pod="openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv" Dec 02 14:43:18 crc kubenswrapper[4814]: I1202 14:43:18.927978 4814 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv" Dec 02 14:43:19 crc kubenswrapper[4814]: I1202 14:43:19.185877 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv"] Dec 02 14:43:19 crc kubenswrapper[4814]: I1202 14:43:19.343959 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv" event={"ID":"5e592dcb-a77d-4bf2-8600-7655ad2b6e81","Type":"ContainerStarted","Data":"a580be4bd65312bea494248e73abd7590c4d3b2f46f0441bb2f56dbec48d7d56"} Dec 02 14:43:24 crc kubenswrapper[4814]: I1202 14:43:24.388982 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv" event={"ID":"5e592dcb-a77d-4bf2-8600-7655ad2b6e81","Type":"ContainerStarted","Data":"a775ab05d3f01cc5c1c6858dd49543c4236bbdf7f4cbd303027aaf70f12bac69"} Dec 02 14:43:24 crc kubenswrapper[4814]: I1202 14:43:24.389959 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv" Dec 02 14:43:24 crc kubenswrapper[4814]: I1202 14:43:24.426723 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv" podStartSLOduration=1.665922557 podStartE2EDuration="6.426696873s" podCreationTimestamp="2025-12-02 14:43:18 +0000 UTC" firstStartedPulling="2025-12-02 14:43:19.211136623 +0000 UTC m=+861.335498827" lastFinishedPulling="2025-12-02 14:43:23.971910919 +0000 UTC m=+866.096273143" observedRunningTime="2025-12-02 14:43:24.417597868 +0000 UTC m=+866.541960122" watchObservedRunningTime="2025-12-02 14:43:24.426696873 +0000 UTC m=+866.551059087" Dec 02 14:43:38 crc kubenswrapper[4814]: I1202 14:43:38.932673 4814 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv" Dec 02 14:43:57 crc kubenswrapper[4814]: I1202 14:43:57.929399 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-hkkjn"] Dec 02 14:43:57 crc kubenswrapper[4814]: I1202 14:43:57.931541 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hkkjn" Dec 02 14:43:57 crc kubenswrapper[4814]: I1202 14:43:57.934072 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-lz27g" Dec 02 14:43:57 crc kubenswrapper[4814]: I1202 14:43:57.941105 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-d268c"] Dec 02 14:43:57 crc kubenswrapper[4814]: I1202 14:43:57.942557 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d268c" Dec 02 14:43:57 crc kubenswrapper[4814]: I1202 14:43:57.947664 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-lp4h5" Dec 02 14:43:57 crc kubenswrapper[4814]: I1202 14:43:57.949409 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-hkkjn"] Dec 02 14:43:57 crc kubenswrapper[4814]: I1202 14:43:57.955746 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-57bd6"] Dec 02 14:43:57 crc kubenswrapper[4814]: I1202 14:43:57.956945 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-57bd6" Dec 02 14:43:57 crc kubenswrapper[4814]: I1202 14:43:57.960496 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-8vr6q" Dec 02 14:43:57 crc kubenswrapper[4814]: I1202 14:43:57.962734 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-d268c"] Dec 02 14:43:57 crc kubenswrapper[4814]: I1202 14:43:57.976287 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-57bd6"] Dec 02 14:43:57 crc kubenswrapper[4814]: I1202 14:43:57.982968 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-7ttvw"] Dec 02 14:43:57 crc kubenswrapper[4814]: I1202 14:43:57.984129 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-7ttvw" Dec 02 14:43:57 crc kubenswrapper[4814]: I1202 14:43:57.990668 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-ddq6c" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.006035 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-knqvz"] Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.007138 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-knqvz" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.009310 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-mxnvf" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.016070 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-7ttvw"] Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.038300 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-z92q9"] Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.039684 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-z92q9" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.042053 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-fdbq7" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.054371 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twvrq\" (UniqueName: \"kubernetes.io/projected/0848a668-6f55-429e-951d-ef285f20a1b6-kube-api-access-twvrq\") pod \"cinder-operator-controller-manager-859b6ccc6-d268c\" (UID: \"0848a668-6f55-429e-951d-ef285f20a1b6\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d268c" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.054447 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x4r7\" (UniqueName: \"kubernetes.io/projected/a2a6620c-b75b-4223-ac5c-721a57d69a06-kube-api-access-2x4r7\") pod \"designate-operator-controller-manager-78b4bc895b-57bd6\" (UID: \"a2a6620c-b75b-4223-ac5c-721a57d69a06\") " 
pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-57bd6"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.054531 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btg8v\" (UniqueName: \"kubernetes.io/projected/d9fc1bb3-efc9-4832-a2d6-b60b3565fb54-kube-api-access-btg8v\") pod \"barbican-operator-controller-manager-7d9dfd778-hkkjn\" (UID: \"d9fc1bb3-efc9-4832-a2d6-b60b3565fb54\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hkkjn"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.055431 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-knqvz"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.061531 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-z92q9"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.074076 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.075163 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.084543 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-25b2c"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.088362 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.091100 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.097371 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-rgnvj"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.098470 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-rgnvj"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.104250 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-d8fh7"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.111513 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-sclz2"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.112748 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sclz2"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.116122 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-ng9zr"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.119945 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-rgnvj"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.142046 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-sclz2"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.158365 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnf54\" (UniqueName: \"kubernetes.io/projected/2bd51779-c0ec-438d-ad69-a737752b9b90-kube-api-access-bnf54\") pod \"glance-operator-controller-manager-77987cd8cd-7ttvw\" (UID: \"2bd51779-c0ec-438d-ad69-a737752b9b90\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-7ttvw"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.164576 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twvrq\" (UniqueName: \"kubernetes.io/projected/0848a668-6f55-429e-951d-ef285f20a1b6-kube-api-access-twvrq\") pod \"cinder-operator-controller-manager-859b6ccc6-d268c\" (UID: \"0848a668-6f55-429e-951d-ef285f20a1b6\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d268c"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.165674 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x4r7\" (UniqueName: \"kubernetes.io/projected/a2a6620c-b75b-4223-ac5c-721a57d69a06-kube-api-access-2x4r7\") pod \"designate-operator-controller-manager-78b4bc895b-57bd6\" (UID: \"a2a6620c-b75b-4223-ac5c-721a57d69a06\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-57bd6"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.168654 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m9bh\" (UniqueName: \"kubernetes.io/projected/37189c89-1420-498e-bcb9-d42e0cd65ba9-kube-api-access-4m9bh\") pod \"horizon-operator-controller-manager-68c6d99b8f-z92q9\" (UID: \"37189c89-1420-498e-bcb9-d42e0cd65ba9\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-z92q9"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.168898 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btg8v\" (UniqueName: \"kubernetes.io/projected/d9fc1bb3-efc9-4832-a2d6-b60b3565fb54-kube-api-access-btg8v\") pod \"barbican-operator-controller-manager-7d9dfd778-hkkjn\" (UID: \"d9fc1bb3-efc9-4832-a2d6-b60b3565fb54\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hkkjn"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.169038 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf6j2\" (UniqueName: \"kubernetes.io/projected/a72143d5-40c2-4fbe-87de-c3be61143c9e-kube-api-access-cf6j2\") pod \"heat-operator-controller-manager-5f64f6f8bb-knqvz\" (UID: \"a72143d5-40c2-4fbe-87de-c3be61143c9e\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-knqvz"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.225256 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twvrq\" (UniqueName: \"kubernetes.io/projected/0848a668-6f55-429e-951d-ef285f20a1b6-kube-api-access-twvrq\") pod \"cinder-operator-controller-manager-859b6ccc6-d268c\" (UID: \"0848a668-6f55-429e-951d-ef285f20a1b6\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d268c"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.230942 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btg8v\" (UniqueName: \"kubernetes.io/projected/d9fc1bb3-efc9-4832-a2d6-b60b3565fb54-kube-api-access-btg8v\") pod \"barbican-operator-controller-manager-7d9dfd778-hkkjn\" (UID: \"d9fc1bb3-efc9-4832-a2d6-b60b3565fb54\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hkkjn"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.232788 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x4r7\" (UniqueName: \"kubernetes.io/projected/a2a6620c-b75b-4223-ac5c-721a57d69a06-kube-api-access-2x4r7\") pod \"designate-operator-controller-manager-78b4bc895b-57bd6\" (UID: \"a2a6620c-b75b-4223-ac5c-721a57d69a06\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-57bd6"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.265607 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-lz27g"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.267966 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hkkjn"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.268367 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-lp4h5"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.273969 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m9bh\" (UniqueName: \"kubernetes.io/projected/37189c89-1420-498e-bcb9-d42e0cd65ba9-kube-api-access-4m9bh\") pod \"horizon-operator-controller-manager-68c6d99b8f-z92q9\" (UID: \"37189c89-1420-498e-bcb9-d42e0cd65ba9\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-z92q9"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.274050 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf6j2\" (UniqueName: \"kubernetes.io/projected/a72143d5-40c2-4fbe-87de-c3be61143c9e-kube-api-access-cf6j2\") pod \"heat-operator-controller-manager-5f64f6f8bb-knqvz\" (UID: \"a72143d5-40c2-4fbe-87de-c3be61143c9e\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-knqvz"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.274080 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk67m\" (UniqueName: \"kubernetes.io/projected/41513450-3327-470f-b978-b5fa13c72991-kube-api-access-zk67m\") pod \"ironic-operator-controller-manager-6c548fd776-rgnvj\" (UID: \"41513450-3327-470f-b978-b5fa13c72991\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-rgnvj"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.274115 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc7j4\" (UniqueName: \"kubernetes.io/projected/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-kube-api-access-gc7j4\") pod \"infra-operator-controller-manager-57548d458d-h9gcz\" (UID: \"2f89b1a9-fb43-4ef5-a8f7-9a15b219022c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.274170 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9lfj\" (UniqueName: \"kubernetes.io/projected/ce263a59-b81e-4394-9dbd-fe091bc7075f-kube-api-access-k9lfj\") pod \"keystone-operator-controller-manager-7765d96ddf-sclz2\" (UID: \"ce263a59-b81e-4394-9dbd-fe091bc7075f\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sclz2"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.274194 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnf54\" (UniqueName: \"kubernetes.io/projected/2bd51779-c0ec-438d-ad69-a737752b9b90-kube-api-access-bnf54\") pod \"glance-operator-controller-manager-77987cd8cd-7ttvw\" (UID: \"2bd51779-c0ec-438d-ad69-a737752b9b90\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-7ttvw"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.274280 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert\") pod \"infra-operator-controller-manager-57548d458d-h9gcz\" (UID: \"2f89b1a9-fb43-4ef5-a8f7-9a15b219022c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.277389 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d268c"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.287122 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-8vr6q"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.288418 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-57bd6"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.306655 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf6j2\" (UniqueName: \"kubernetes.io/projected/a72143d5-40c2-4fbe-87de-c3be61143c9e-kube-api-access-cf6j2\") pod \"heat-operator-controller-manager-5f64f6f8bb-knqvz\" (UID: \"a72143d5-40c2-4fbe-87de-c3be61143c9e\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-knqvz"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.315272 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m9bh\" (UniqueName: \"kubernetes.io/projected/37189c89-1420-498e-bcb9-d42e0cd65ba9-kube-api-access-4m9bh\") pod \"horizon-operator-controller-manager-68c6d99b8f-z92q9\" (UID: \"37189c89-1420-498e-bcb9-d42e0cd65ba9\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-z92q9"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.327440 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-mxnvf"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.334901 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnf54\" (UniqueName: \"kubernetes.io/projected/2bd51779-c0ec-438d-ad69-a737752b9b90-kube-api-access-bnf54\") pod \"glance-operator-controller-manager-77987cd8cd-7ttvw\" (UID: \"2bd51779-c0ec-438d-ad69-a737752b9b90\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-7ttvw"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.335008 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-knqvz"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.362870 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-fdbq7"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.371570 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-z92q9"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.375082 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk67m\" (UniqueName: \"kubernetes.io/projected/41513450-3327-470f-b978-b5fa13c72991-kube-api-access-zk67m\") pod \"ironic-operator-controller-manager-6c548fd776-rgnvj\" (UID: \"41513450-3327-470f-b978-b5fa13c72991\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-rgnvj"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.375123 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc7j4\" (UniqueName: \"kubernetes.io/projected/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-kube-api-access-gc7j4\") pod \"infra-operator-controller-manager-57548d458d-h9gcz\" (UID: \"2f89b1a9-fb43-4ef5-a8f7-9a15b219022c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.375175 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9lfj\" (UniqueName: \"kubernetes.io/projected/ce263a59-b81e-4394-9dbd-fe091bc7075f-kube-api-access-k9lfj\") pod \"keystone-operator-controller-manager-7765d96ddf-sclz2\" (UID: \"ce263a59-b81e-4394-9dbd-fe091bc7075f\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sclz2"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.375215 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert\") pod \"infra-operator-controller-manager-57548d458d-h9gcz\" (UID: \"2f89b1a9-fb43-4ef5-a8f7-9a15b219022c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.380258 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Dec 02 14:43:58 crc kubenswrapper[4814]: E1202 14:43:58.385414 4814 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Dec 02 14:43:58 crc kubenswrapper[4814]: E1202 14:43:58.385502 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert podName:2f89b1a9-fb43-4ef5-a8f7-9a15b219022c nodeName:}" failed. No retries permitted until 2025-12-02 14:43:58.885479507 +0000 UTC m=+901.009841711 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert") pod "infra-operator-controller-manager-57548d458d-h9gcz" (UID: "2f89b1a9-fb43-4ef5-a8f7-9a15b219022c") : secret "infra-operator-webhook-server-cert" not found
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.388930 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-zbs77"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.390107 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-zbs77"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.390132 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9nxkr"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.391169 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zbs77"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.391255 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9nxkr"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.391182 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-q7sbm"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.393322 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9nxkr"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.393443 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-ngzqv"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.396175 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-pnqln"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.396729 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-pr2dk"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.398984 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-q7sbm"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.401182 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-6nmk4"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.403652 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk67m\" (UniqueName: \"kubernetes.io/projected/41513450-3327-470f-b978-b5fa13c72991-kube-api-access-zk67m\") pod \"ironic-operator-controller-manager-6c548fd776-rgnvj\" (UID: \"41513450-3327-470f-b978-b5fa13c72991\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-rgnvj"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.404101 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9lfj\" (UniqueName: \"kubernetes.io/projected/ce263a59-b81e-4394-9dbd-fe091bc7075f-kube-api-access-k9lfj\") pod \"keystone-operator-controller-manager-7765d96ddf-sclz2\" (UID: \"ce263a59-b81e-4394-9dbd-fe091bc7075f\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sclz2"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.408602 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-ngzqv"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.408769 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-q7sbm"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.408860 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-l5bdj"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.410043 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-l5bdj"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.408877 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-ngzqv"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.415332 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc7j4\" (UniqueName: \"kubernetes.io/projected/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-kube-api-access-gc7j4\") pod \"infra-operator-controller-manager-57548d458d-h9gcz\" (UID: \"2f89b1a9-fb43-4ef5-a8f7-9a15b219022c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.420367 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-rgnvj"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.421611 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-b4v6q"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.421724 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-tcmcf"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.421900 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-l5bdj"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.434296 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.435787 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.438949 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.439031 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-qgm5q"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.441412 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-w6xr2"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.443875 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-w6xr2"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.445788 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-w6xr2"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.446645 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-8s6fr"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.451939 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-kxkhr"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.453089 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-kxkhr"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.455369 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-ggcq5"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.470311 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.478129 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bblsk\" (UniqueName: \"kubernetes.io/projected/b362d2bd-46b7-478b-af11-125a0d2fd5b0-kube-api-access-bblsk\") pod \"mariadb-operator-controller-manager-56bbcc9d85-q7sbm\" (UID: \"b362d2bd-46b7-478b-af11-125a0d2fd5b0\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-q7sbm"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.478281 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkvh2\" (UniqueName: \"kubernetes.io/projected/9c15f550-1056-421a-aee0-301d768590c6-kube-api-access-lkvh2\") pod \"octavia-operator-controller-manager-998648c74-l5bdj\" (UID: \"9c15f550-1056-421a-aee0-301d768590c6\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-l5bdj"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.478307 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb4v5\" (UniqueName: \"kubernetes.io/projected/f36f9c57-13d7-48e0-b519-89e87670e082-kube-api-access-wb4v5\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-9nxkr\" (UID: \"f36f9c57-13d7-48e0-b519-89e87670e082\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9nxkr"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.478328 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87h64\" (UniqueName: \"kubernetes.io/projected/88f105c6-4510-46e0-8be8-dc360a9b73ce-kube-api-access-87h64\") pod \"nova-operator-controller-manager-697bc559fc-ngzqv\" (UID: \"88f105c6-4510-46e0-8be8-dc360a9b73ce\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-ngzqv"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.478386 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qqgh\" (UniqueName: \"kubernetes.io/projected/5d626448-a1ab-4d7a-bc09-d0777abad82d-kube-api-access-7qqgh\") pod \"manila-operator-controller-manager-7c79b5df47-zbs77\" (UID: \"5d626448-a1ab-4d7a-bc09-d0777abad82d\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zbs77"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.486611 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-kxkhr"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.498508 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-fj2tl"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.500489 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-fj2tl"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.502377 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-fj2tl"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.502995 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-zcjcr"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.521208 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mrbvm"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.522637 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mrbvm"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.523011 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sclz2"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.528000 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-pdg2x"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.565536 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mrbvm"]
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.595689 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bblsk\" (UniqueName: \"kubernetes.io/projected/b362d2bd-46b7-478b-af11-125a0d2fd5b0-kube-api-access-bblsk\") pod \"mariadb-operator-controller-manager-56bbcc9d85-q7sbm\" (UID: \"b362d2bd-46b7-478b-af11-125a0d2fd5b0\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-q7sbm"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.602483 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzbrj\" (UniqueName: \"kubernetes.io/projected/491246a7-1bdb-4efc-99a8-16ab9502d44d-kube-api-access-jzbrj\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld\" (UID: \"491246a7-1bdb-4efc-99a8-16ab9502d44d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.602585 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkvh2\" (UniqueName: \"kubernetes.io/projected/9c15f550-1056-421a-aee0-301d768590c6-kube-api-access-lkvh2\") pod \"octavia-operator-controller-manager-998648c74-l5bdj\" (UID: \"9c15f550-1056-421a-aee0-301d768590c6\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-l5bdj"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.602647 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb4v5\" (UniqueName: \"kubernetes.io/projected/f36f9c57-13d7-48e0-b519-89e87670e082-kube-api-access-wb4v5\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-9nxkr\" (UID: \"f36f9c57-13d7-48e0-b519-89e87670e082\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9nxkr"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.602684 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87h64\" (UniqueName: \"kubernetes.io/projected/88f105c6-4510-46e0-8be8-dc360a9b73ce-kube-api-access-87h64\") pod \"nova-operator-controller-manager-697bc559fc-ngzqv\" (UID: \"88f105c6-4510-46e0-8be8-dc360a9b73ce\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-ngzqv"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.602756 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz9gd\" (UniqueName: \"kubernetes.io/projected/94a7cab9-3ba9-49c1-b468-b42424e3c43c-kube-api-access-nz9gd\") pod \"ovn-operator-controller-manager-b6456fdb6-w6xr2\" (UID: \"94a7cab9-3ba9-49c1-b468-b42424e3c43c\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-w6xr2"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.602839 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nzwk\" (UniqueName: \"kubernetes.io/projected/8e50493d-4c39-4cb7-b752-714c0520e559-kube-api-access-2nzwk\") pod \"placement-operator-controller-manager-78f8948974-kxkhr\" (UID: \"8e50493d-4c39-4cb7-b752-714c0520e559\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-kxkhr"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.602902 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29xlr\" (UniqueName: \"kubernetes.io/projected/ac66ba40-6166-47b9-9c47-f0f0637e774c-kube-api-access-29xlr\") pod \"swift-operator-controller-manager-5f8c65bbfc-fj2tl\" (UID: \"ac66ba40-6166-47b9-9c47-f0f0637e774c\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-fj2tl"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.602973 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld\" (UID: \"491246a7-1bdb-4efc-99a8-16ab9502d44d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.603196 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qqgh\" (UniqueName: \"kubernetes.io/projected/5d626448-a1ab-4d7a-bc09-d0777abad82d-kube-api-access-7qqgh\") pod \"manila-operator-controller-manager-7c79b5df47-zbs77\" (UID: \"5d626448-a1ab-4d7a-bc09-d0777abad82d\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zbs77"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.615616 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-ddq6c"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.623698 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-7ttvw"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.678146 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87h64\" (UniqueName: \"kubernetes.io/projected/88f105c6-4510-46e0-8be8-dc360a9b73ce-kube-api-access-87h64\") pod \"nova-operator-controller-manager-697bc559fc-ngzqv\" (UID: \"88f105c6-4510-46e0-8be8-dc360a9b73ce\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-ngzqv"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.689197 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkvh2\" (UniqueName: \"kubernetes.io/projected/9c15f550-1056-421a-aee0-301d768590c6-kube-api-access-lkvh2\") pod \"octavia-operator-controller-manager-998648c74-l5bdj\" (UID: \"9c15f550-1056-421a-aee0-301d768590c6\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-l5bdj"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.695733 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bblsk\" (UniqueName: \"kubernetes.io/projected/b362d2bd-46b7-478b-af11-125a0d2fd5b0-kube-api-access-bblsk\") pod \"mariadb-operator-controller-manager-56bbcc9d85-q7sbm\" (UID: \"b362d2bd-46b7-478b-af11-125a0d2fd5b0\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-q7sbm"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.703624 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qqgh\" (UniqueName: \"kubernetes.io/projected/5d626448-a1ab-4d7a-bc09-d0777abad82d-kube-api-access-7qqgh\") pod \"manila-operator-controller-manager-7c79b5df47-zbs77\" (UID: \"5d626448-a1ab-4d7a-bc09-d0777abad82d\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zbs77"
Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.706599 4814 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-cskhk"] Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.719376 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-cskhk" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.721124 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhlfd\" (UniqueName: \"kubernetes.io/projected/55d8049a-e6f9-4ace-ada2-8fc96a06f1e4-kube-api-access-xhlfd\") pod \"telemetry-operator-controller-manager-76cc84c6bb-mrbvm\" (UID: \"55d8049a-e6f9-4ace-ada2-8fc96a06f1e4\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mrbvm" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.721172 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz9gd\" (UniqueName: \"kubernetes.io/projected/94a7cab9-3ba9-49c1-b468-b42424e3c43c-kube-api-access-nz9gd\") pod \"ovn-operator-controller-manager-b6456fdb6-w6xr2\" (UID: \"94a7cab9-3ba9-49c1-b468-b42424e3c43c\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-w6xr2" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.721222 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nzwk\" (UniqueName: \"kubernetes.io/projected/8e50493d-4c39-4cb7-b752-714c0520e559-kube-api-access-2nzwk\") pod \"placement-operator-controller-manager-78f8948974-kxkhr\" (UID: \"8e50493d-4c39-4cb7-b752-714c0520e559\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-kxkhr" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.721260 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29xlr\" (UniqueName: \"kubernetes.io/projected/ac66ba40-6166-47b9-9c47-f0f0637e774c-kube-api-access-29xlr\") pod 
\"swift-operator-controller-manager-5f8c65bbfc-fj2tl\" (UID: \"ac66ba40-6166-47b9-9c47-f0f0637e774c\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-fj2tl" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.721287 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld\" (UID: \"491246a7-1bdb-4efc-99a8-16ab9502d44d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.721471 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzbrj\" (UniqueName: \"kubernetes.io/projected/491246a7-1bdb-4efc-99a8-16ab9502d44d-kube-api-access-jzbrj\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld\" (UID: \"491246a7-1bdb-4efc-99a8-16ab9502d44d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" Dec 02 14:43:58 crc kubenswrapper[4814]: E1202 14:43:58.721608 4814 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:43:58 crc kubenswrapper[4814]: E1202 14:43:58.721656 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert podName:491246a7-1bdb-4efc-99a8-16ab9502d44d nodeName:}" failed. No retries permitted until 2025-12-02 14:43:59.221642108 +0000 UTC m=+901.346004312 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" (UID: "491246a7-1bdb-4efc-99a8-16ab9502d44d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.722191 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-zxqd2" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.752105 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-cskhk"] Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.771072 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29xlr\" (UniqueName: \"kubernetes.io/projected/ac66ba40-6166-47b9-9c47-f0f0637e774c-kube-api-access-29xlr\") pod \"swift-operator-controller-manager-5f8c65bbfc-fj2tl\" (UID: \"ac66ba40-6166-47b9-9c47-f0f0637e774c\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-fj2tl" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.777769 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-l5bdj" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.795043 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzbrj\" (UniqueName: \"kubernetes.io/projected/491246a7-1bdb-4efc-99a8-16ab9502d44d-kube-api-access-jzbrj\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld\" (UID: \"491246a7-1bdb-4efc-99a8-16ab9502d44d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.795994 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz9gd\" (UniqueName: \"kubernetes.io/projected/94a7cab9-3ba9-49c1-b468-b42424e3c43c-kube-api-access-nz9gd\") pod \"ovn-operator-controller-manager-b6456fdb6-w6xr2\" (UID: \"94a7cab9-3ba9-49c1-b468-b42424e3c43c\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-w6xr2" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.808291 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6"] Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.809473 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.811159 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-ts267" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.823034 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhlfd\" (UniqueName: \"kubernetes.io/projected/55d8049a-e6f9-4ace-ada2-8fc96a06f1e4-kube-api-access-xhlfd\") pod \"telemetry-operator-controller-manager-76cc84c6bb-mrbvm\" (UID: \"55d8049a-e6f9-4ace-ada2-8fc96a06f1e4\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mrbvm" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.823166 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbgjc\" (UniqueName: \"kubernetes.io/projected/391bcd0f-854f-43c3-ac40-a917e68bf3b8-kube-api-access-fbgjc\") pod \"test-operator-controller-manager-5854674fcc-cskhk\" (UID: \"391bcd0f-854f-43c3-ac40-a917e68bf3b8\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-cskhk" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.830986 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-ngzqv" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.842708 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6"] Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.915130 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zbs77" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.926864 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbgjc\" (UniqueName: \"kubernetes.io/projected/391bcd0f-854f-43c3-ac40-a917e68bf3b8-kube-api-access-fbgjc\") pod \"test-operator-controller-manager-5854674fcc-cskhk\" (UID: \"391bcd0f-854f-43c3-ac40-a917e68bf3b8\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-cskhk" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.926913 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert\") pod \"infra-operator-controller-manager-57548d458d-h9gcz\" (UID: \"2f89b1a9-fb43-4ef5-a8f7-9a15b219022c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.927006 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8t88\" (UniqueName: \"kubernetes.io/projected/4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad-kube-api-access-h8t88\") pod \"watcher-operator-controller-manager-b76b9c9d7-l6nn6\" (UID: \"4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad\") " pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" Dec 02 14:43:58 crc kubenswrapper[4814]: E1202 14:43:58.927755 4814 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 14:43:58 crc kubenswrapper[4814]: E1202 14:43:58.929593 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert podName:2f89b1a9-fb43-4ef5-a8f7-9a15b219022c nodeName:}" failed. 
No retries permitted until 2025-12-02 14:43:59.929569971 +0000 UTC m=+902.053932175 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert") pod "infra-operator-controller-manager-57548d458d-h9gcz" (UID: "2f89b1a9-fb43-4ef5-a8f7-9a15b219022c") : secret "infra-operator-webhook-server-cert" not found Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.939408 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz"] Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.940539 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.943645 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-g2hp5" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.943789 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.944429 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.949175 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-w6xr2" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.957078 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz"] Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.958827 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-q7sbm" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.991104 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-57776"] Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.992241 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-57776" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.994373 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-zq5bs" Dec 02 14:43:58 crc kubenswrapper[4814]: I1202 14:43:58.995812 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-fj2tl" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:58.998680 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-57776"] Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.028078 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhf4c\" (UniqueName: \"kubernetes.io/projected/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-kube-api-access-vhf4c\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.028123 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8t88\" (UniqueName: \"kubernetes.io/projected/4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad-kube-api-access-h8t88\") pod \"watcher-operator-controller-manager-b76b9c9d7-l6nn6\" (UID: 
\"4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad\") " pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.028146 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.028164 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.028302 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvvhz\" (UniqueName: \"kubernetes.io/projected/c288b87f-0e35-4b3f-b0f5-1565425396ba-kube-api-access-gvvhz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-57776\" (UID: \"c288b87f-0e35-4b3f-b0f5-1565425396ba\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-57776" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.060460 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-d268c"] Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.065606 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-hkkjn"] Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.129889 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhf4c\" (UniqueName: \"kubernetes.io/projected/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-kube-api-access-vhf4c\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.129982 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.130022 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.130113 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvvhz\" (UniqueName: \"kubernetes.io/projected/c288b87f-0e35-4b3f-b0f5-1565425396ba-kube-api-access-gvvhz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-57776\" (UID: \"c288b87f-0e35-4b3f-b0f5-1565425396ba\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-57776" Dec 02 14:43:59 crc kubenswrapper[4814]: E1202 14:43:59.130191 4814 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 14:43:59 crc kubenswrapper[4814]: E1202 14:43:59.130300 4814 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs podName:1d9fb058-e400-4c7c-af16-093ca5a5f1bc nodeName:}" failed. No retries permitted until 2025-12-02 14:43:59.630269882 +0000 UTC m=+901.754632086 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs") pod "openstack-operator-controller-manager-59c6849d55-jttnz" (UID: "1d9fb058-e400-4c7c-af16-093ca5a5f1bc") : secret "webhook-server-cert" not found Dec 02 14:43:59 crc kubenswrapper[4814]: E1202 14:43:59.130302 4814 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 14:43:59 crc kubenswrapper[4814]: E1202 14:43:59.130387 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs podName:1d9fb058-e400-4c7c-af16-093ca5a5f1bc nodeName:}" failed. No retries permitted until 2025-12-02 14:43:59.630365654 +0000 UTC m=+901.754727898 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs") pod "openstack-operator-controller-manager-59c6849d55-jttnz" (UID: "1d9fb058-e400-4c7c-af16-093ca5a5f1bc") : secret "metrics-server-cert" not found Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.230956 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld\" (UID: \"491246a7-1bdb-4efc-99a8-16ab9502d44d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" Dec 02 14:43:59 crc kubenswrapper[4814]: E1202 14:43:59.232207 4814 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:43:59 crc kubenswrapper[4814]: E1202 14:43:59.232350 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert podName:491246a7-1bdb-4efc-99a8-16ab9502d44d nodeName:}" failed. No retries permitted until 2025-12-02 14:44:00.232320813 +0000 UTC m=+902.356683017 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" (UID: "491246a7-1bdb-4efc-99a8-16ab9502d44d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.253962 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-knqvz"] Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.288597 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-57bd6"] Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.295280 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-z92q9"] Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.304270 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-sclz2"] Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.326046 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-7ttvw"] Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.364755 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-rgnvj"] Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.375950 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nzwk\" (UniqueName: \"kubernetes.io/projected/8e50493d-4c39-4cb7-b752-714c0520e559-kube-api-access-2nzwk\") pod \"placement-operator-controller-manager-78f8948974-kxkhr\" (UID: \"8e50493d-4c39-4cb7-b752-714c0520e559\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-kxkhr" Dec 02 14:43:59 crc kubenswrapper[4814]: 
I1202 14:43:59.376059 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb4v5\" (UniqueName: \"kubernetes.io/projected/f36f9c57-13d7-48e0-b519-89e87670e082-kube-api-access-wb4v5\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-9nxkr\" (UID: \"f36f9c57-13d7-48e0-b519-89e87670e082\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9nxkr" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.376503 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-kxkhr" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.385482 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8t88\" (UniqueName: \"kubernetes.io/projected/4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad-kube-api-access-h8t88\") pod \"watcher-operator-controller-manager-b76b9c9d7-l6nn6\" (UID: \"4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad\") " pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.385813 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhf4c\" (UniqueName: \"kubernetes.io/projected/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-kube-api-access-vhf4c\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.387097 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbgjc\" (UniqueName: \"kubernetes.io/projected/391bcd0f-854f-43c3-ac40-a917e68bf3b8-kube-api-access-fbgjc\") pod \"test-operator-controller-manager-5854674fcc-cskhk\" (UID: \"391bcd0f-854f-43c3-ac40-a917e68bf3b8\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-cskhk" Dec 02 
14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.386771 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvvhz\" (UniqueName: \"kubernetes.io/projected/c288b87f-0e35-4b3f-b0f5-1565425396ba-kube-api-access-gvvhz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-57776\" (UID: \"c288b87f-0e35-4b3f-b0f5-1565425396ba\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-57776" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.388017 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhlfd\" (UniqueName: \"kubernetes.io/projected/55d8049a-e6f9-4ace-ada2-8fc96a06f1e4-kube-api-access-xhlfd\") pod \"telemetry-operator-controller-manager-76cc84c6bb-mrbvm\" (UID: \"55d8049a-e6f9-4ace-ada2-8fc96a06f1e4\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mrbvm" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.400724 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.482650 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9nxkr" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.659255 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.659559 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:43:59 crc kubenswrapper[4814]: E1202 14:43:59.659412 4814 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 14:43:59 crc kubenswrapper[4814]: E1202 14:43:59.660113 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs podName:1d9fb058-e400-4c7c-af16-093ca5a5f1bc nodeName:}" failed. No retries permitted until 2025-12-02 14:44:00.660096568 +0000 UTC m=+902.784458772 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs") pod "openstack-operator-controller-manager-59c6849d55-jttnz" (UID: "1d9fb058-e400-4c7c-af16-093ca5a5f1bc") : secret "webhook-server-cert" not found Dec 02 14:43:59 crc kubenswrapper[4814]: E1202 14:43:59.659701 4814 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 14:43:59 crc kubenswrapper[4814]: E1202 14:43:59.660596 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs podName:1d9fb058-e400-4c7c-af16-093ca5a5f1bc nodeName:}" failed. No retries permitted until 2025-12-02 14:44:00.66058643 +0000 UTC m=+902.784948634 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs") pod "openstack-operator-controller-manager-59c6849d55-jttnz" (UID: "1d9fb058-e400-4c7c-af16-093ca5a5f1bc") : secret "metrics-server-cert" not found Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.679459 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-57776" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.679883 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.680143 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mrbvm" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.680380 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-cskhk" Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.922784 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-l5bdj"] Dec 02 14:43:59 crc kubenswrapper[4814]: W1202 14:43:59.943183 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c15f550_1056_421a_aee0_301d768590c6.slice/crio-49560710f9211f2a1adef305a734d01c5c51a08a9a90d6204a57bc3d5668d93c WatchSource:0}: Error finding container 49560710f9211f2a1adef305a734d01c5c51a08a9a90d6204a57bc3d5668d93c: Status 404 returned error can't find the container with id 49560710f9211f2a1adef305a734d01c5c51a08a9a90d6204a57bc3d5668d93c Dec 02 14:43:59 crc kubenswrapper[4814]: I1202 14:43:59.974262 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert\") pod \"infra-operator-controller-manager-57548d458d-h9gcz\" (UID: \"2f89b1a9-fb43-4ef5-a8f7-9a15b219022c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz" Dec 02 14:43:59 crc kubenswrapper[4814]: E1202 14:43:59.974492 4814 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 14:43:59 crc kubenswrapper[4814]: E1202 14:43:59.974594 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert podName:2f89b1a9-fb43-4ef5-a8f7-9a15b219022c nodeName:}" failed. No retries permitted until 2025-12-02 14:44:01.974566647 +0000 UTC m=+904.098928851 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert") pod "infra-operator-controller-manager-57548d458d-h9gcz" (UID: "2f89b1a9-fb43-4ef5-a8f7-9a15b219022c") : secret "infra-operator-webhook-server-cert" not found Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.036443 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-q7sbm"] Dec 02 14:44:00 crc kubenswrapper[4814]: W1202 14:44:00.056052 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb362d2bd_46b7_478b_af11_125a0d2fd5b0.slice/crio-effc4b47354491ff12404899b8533ff9d732c1e127a51f567ee5e14a532b9831 WatchSource:0}: Error finding container effc4b47354491ff12404899b8533ff9d732c1e127a51f567ee5e14a532b9831: Status 404 returned error can't find the container with id effc4b47354491ff12404899b8533ff9d732c1e127a51f567ee5e14a532b9831 Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.262077 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-zbs77"] Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.269189 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-kxkhr"] Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.277537 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-ngzqv"] Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.283614 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld\" (UID: \"491246a7-1bdb-4efc-99a8-16ab9502d44d\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.283814 4814 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.283886 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert podName:491246a7-1bdb-4efc-99a8-16ab9502d44d nodeName:}" failed. No retries permitted until 2025-12-02 14:44:02.283866494 +0000 UTC m=+904.408228698 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" (UID: "491246a7-1bdb-4efc-99a8-16ab9502d44d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.284710 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-fj2tl"] Dec 02 14:44:00 crc kubenswrapper[4814]: W1202 14:44:00.293536 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88f105c6_4510_46e0_8be8_dc360a9b73ce.slice/crio-bc7c75b3aaa4e70110b47995a840be178a1325c64e1bea4aa16c8a60b05ca3c9 WatchSource:0}: Error finding container bc7c75b3aaa4e70110b47995a840be178a1325c64e1bea4aa16c8a60b05ca3c9: Status 404 returned error can't find the container with id bc7c75b3aaa4e70110b47995a840be178a1325c64e1bea4aa16c8a60b05ca3c9 Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.303797 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9nxkr"] Dec 02 14:44:00 crc kubenswrapper[4814]: W1202 
14:44:00.309972 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e50493d_4c39_4cb7_b752_714c0520e559.slice/crio-35330a596f13d7663c81be54fe81190997329deefaafbf24d368037f9b3f67a1 WatchSource:0}: Error finding container 35330a596f13d7663c81be54fe81190997329deefaafbf24d368037f9b3f67a1: Status 404 returned error can't find the container with id 35330a596f13d7663c81be54fe81190997329deefaafbf24d368037f9b3f67a1 Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.319658 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-29xlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-fj2tl_openstack-operators(ac66ba40-6166-47b9-9c47-f0f0637e774c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.322429 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-29xlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-fj2tl_openstack-operators(ac66ba40-6166-47b9-9c47-f0f0637e774c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.323551 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-fj2tl" podUID="ac66ba40-6166-47b9-9c47-f0f0637e774c" Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.344402 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-fj2tl" event={"ID":"ac66ba40-6166-47b9-9c47-f0f0637e774c","Type":"ContainerStarted","Data":"6b57ac137c2429beddb534a8b1bf3258eed17ccbe5f33847e1a58051d5b69872"} Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.345659 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-z92q9" event={"ID":"37189c89-1420-498e-bcb9-d42e0cd65ba9","Type":"ContainerStarted","Data":"45c374226e350681432ef991952a3f75d0114677637688d2be58d22d9d81853d"} Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.349558 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-fj2tl" podUID="ac66ba40-6166-47b9-9c47-f0f0637e774c" Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.349693 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-q7sbm" event={"ID":"b362d2bd-46b7-478b-af11-125a0d2fd5b0","Type":"ContainerStarted","Data":"effc4b47354491ff12404899b8533ff9d732c1e127a51f567ee5e14a532b9831"} Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.350750 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d268c" event={"ID":"0848a668-6f55-429e-951d-ef285f20a1b6","Type":"ContainerStarted","Data":"f040c4e6f2797f794990ec02e8341783327d134e3d2ffdd5e13cbb3e05327b92"} Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.355032 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9nxkr" event={"ID":"f36f9c57-13d7-48e0-b519-89e87670e082","Type":"ContainerStarted","Data":"c25ddffcd6d4b9650783c0325f43c0b94350a2dd73a9f5d428ded504da2192ba"} Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.356921 4814 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-knqvz" event={"ID":"a72143d5-40c2-4fbe-87de-c3be61143c9e","Type":"ContainerStarted","Data":"939eb3aebf58d743ad411528253bf618ba6865e23138e450376e285b4b7dfc6f"} Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.361144 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hkkjn" event={"ID":"d9fc1bb3-efc9-4832-a2d6-b60b3565fb54","Type":"ContainerStarted","Data":"cb526c095d056b864ab819be4ed99fd9886df7907997915128b5fa9610804993"} Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.364534 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-kxkhr" event={"ID":"8e50493d-4c39-4cb7-b752-714c0520e559","Type":"ContainerStarted","Data":"35330a596f13d7663c81be54fe81190997329deefaafbf24d368037f9b3f67a1"} Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.366825 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-rgnvj" event={"ID":"41513450-3327-470f-b978-b5fa13c72991","Type":"ContainerStarted","Data":"c1cb8d8ff07201390770c35134c2ceeadac97ef6f238d046117b660710789468"} Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.370356 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-57bd6" event={"ID":"a2a6620c-b75b-4223-ac5c-721a57d69a06","Type":"ContainerStarted","Data":"8e6ace7dea556d6df9fba28fa84b102ccdbb3848e975d6a9029248ae9ee34619"} Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.374180 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zbs77" 
event={"ID":"5d626448-a1ab-4d7a-bc09-d0777abad82d","Type":"ContainerStarted","Data":"aa141dfe6c9f9234c2b71bb5621c61c378180026611885ae26fe6b4ff9189027"} Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.376845 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-l5bdj" event={"ID":"9c15f550-1056-421a-aee0-301d768590c6","Type":"ContainerStarted","Data":"49560710f9211f2a1adef305a734d01c5c51a08a9a90d6204a57bc3d5668d93c"} Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.383555 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-ngzqv" event={"ID":"88f105c6-4510-46e0-8be8-dc360a9b73ce","Type":"ContainerStarted","Data":"bc7c75b3aaa4e70110b47995a840be178a1325c64e1bea4aa16c8a60b05ca3c9"} Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.384533 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sclz2" event={"ID":"ce263a59-b81e-4394-9dbd-fe091bc7075f","Type":"ContainerStarted","Data":"b204269221b22be36741048d6397d5f2c2b22012aa4c1c1834389aaa0eb96cbc"} Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.388140 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-7ttvw" event={"ID":"2bd51779-c0ec-438d-ad69-a737752b9b90","Type":"ContainerStarted","Data":"07b21d164fc93a57a3005005147b93652c838fd6f7490b0f6eca9b4312ac0296"} Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.420859 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-w6xr2"] Dec 02 14:44:00 crc kubenswrapper[4814]: W1202 14:44:00.423654 4814 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94a7cab9_3ba9_49c1_b468_b42424e3c43c.slice/crio-3099d1a9536cb2bbcdd650e1a2d4f18199cba6a827f007c22ff262a8b91981f4 WatchSource:0}: Error finding container 3099d1a9536cb2bbcdd650e1a2d4f18199cba6a827f007c22ff262a8b91981f4: Status 404 returned error can't find the container with id 3099d1a9536cb2bbcdd650e1a2d4f18199cba6a827f007c22ff262a8b91981f4 Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.475786 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mrbvm"] Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.485418 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6"] Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.490601 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xhlfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-mrbvm_openstack-operators(55d8049a-e6f9-4ace-ada2-8fc96a06f1e4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.492317 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xhlfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-mrbvm_openstack-operators(55d8049a-e6f9-4ace-ada2-8fc96a06f1e4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.492590 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-cskhk"] Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.493960 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mrbvm" 
podUID="55d8049a-e6f9-4ace-ada2-8fc96a06f1e4" Dec 02 14:44:00 crc kubenswrapper[4814]: W1202 14:44:00.498352 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f85bd48_27d1_4e3a_8fb8_ee34af71a2ad.slice/crio-df8bec736aa6848cba96695b2956fd3c344666166df3bec3ca247b699d9a5f59 WatchSource:0}: Error finding container df8bec736aa6848cba96695b2956fd3c344666166df3bec3ca247b699d9a5f59: Status 404 returned error can't find the container with id df8bec736aa6848cba96695b2956fd3c344666166df3bec3ca247b699d9a5f59 Dec 02 14:44:00 crc kubenswrapper[4814]: W1202 14:44:00.500530 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod391bcd0f_854f_43c3_ac40_a917e68bf3b8.slice/crio-71f56706033c820ef991360fb254a692ce85917b5b019e362f5e44c442211a71 WatchSource:0}: Error finding container 71f56706033c820ef991360fb254a692ce85917b5b019e362f5e44c442211a71: Status 404 returned error can't find the container with id 71f56706033c820ef991360fb254a692ce85917b5b019e362f5e44c442211a71 Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.501492 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.158:5001/openstack-k8s-operators/watcher-operator:04f2b756afa9e575b2ebfa769c85094b932517cd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h8t88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-b76b9c9d7-l6nn6_openstack-operators(4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.501518 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-57776"] Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.504091 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h8t88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-b76b9c9d7-l6nn6_openstack-operators(4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.505299 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" 
podUID="4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad" Dec 02 14:44:00 crc kubenswrapper[4814]: W1202 14:44:00.511043 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc288b87f_0e35_4b3f_b0f5_1565425396ba.slice/crio-94373fb9a970fcc9ef2d3e8c6a8ccaf89bbd179d2c65d9b7e5252b6369dc78d3 WatchSource:0}: Error finding container 94373fb9a970fcc9ef2d3e8c6a8ccaf89bbd179d2c65d9b7e5252b6369dc78d3: Status 404 returned error can't find the container with id 94373fb9a970fcc9ef2d3e8c6a8ccaf89bbd179d2c65d9b7e5252b6369dc78d3 Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.512185 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fbgjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-cskhk_openstack-operators(391bcd0f-854f-43c3-ac40-a917e68bf3b8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.514041 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fbgjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-cskhk_openstack-operators(391bcd0f-854f-43c3-ac40-a917e68bf3b8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.515572 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gvvhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-57776_openstack-operators(c288b87f-0e35-4b3f-b0f5-1565425396ba): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.515631 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-cskhk" podUID="391bcd0f-854f-43c3-ac40-a917e68bf3b8" Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.516629 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-57776" podUID="c288b87f-0e35-4b3f-b0f5-1565425396ba" Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.701860 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:44:00 crc kubenswrapper[4814]: I1202 14:44:00.701905 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.702007 4814 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.702065 4814 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs podName:1d9fb058-e400-4c7c-af16-093ca5a5f1bc nodeName:}" failed. No retries permitted until 2025-12-02 14:44:02.702051403 +0000 UTC m=+904.826413607 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs") pod "openstack-operator-controller-manager-59c6849d55-jttnz" (UID: "1d9fb058-e400-4c7c-af16-093ca5a5f1bc") : secret "metrics-server-cert" not found Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.702005 4814 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 14:44:00 crc kubenswrapper[4814]: E1202 14:44:00.702124 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs podName:1d9fb058-e400-4c7c-af16-093ca5a5f1bc nodeName:}" failed. No retries permitted until 2025-12-02 14:44:02.702107034 +0000 UTC m=+904.826469238 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs") pod "openstack-operator-controller-manager-59c6849d55-jttnz" (UID: "1d9fb058-e400-4c7c-af16-093ca5a5f1bc") : secret "webhook-server-cert" not found Dec 02 14:44:01 crc kubenswrapper[4814]: I1202 14:44:01.401773 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-57776" event={"ID":"c288b87f-0e35-4b3f-b0f5-1565425396ba","Type":"ContainerStarted","Data":"94373fb9a970fcc9ef2d3e8c6a8ccaf89bbd179d2c65d9b7e5252b6369dc78d3"} Dec 02 14:44:01 crc kubenswrapper[4814]: E1202 14:44:01.405714 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-57776" podUID="c288b87f-0e35-4b3f-b0f5-1565425396ba" Dec 02 14:44:01 crc kubenswrapper[4814]: I1202 14:44:01.421059 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mrbvm" event={"ID":"55d8049a-e6f9-4ace-ada2-8fc96a06f1e4","Type":"ContainerStarted","Data":"4ab68ff67f9eb66e2f3ae313e83c350917834e861b401d46f3cdea4b279919a0"} Dec 02 14:44:01 crc kubenswrapper[4814]: E1202 14:44:01.425292 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mrbvm" podUID="55d8049a-e6f9-4ace-ada2-8fc96a06f1e4" Dec 02 14:44:01 crc kubenswrapper[4814]: I1202 14:44:01.430657 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-w6xr2" event={"ID":"94a7cab9-3ba9-49c1-b468-b42424e3c43c","Type":"ContainerStarted","Data":"3099d1a9536cb2bbcdd650e1a2d4f18199cba6a827f007c22ff262a8b91981f4"} Dec 02 14:44:01 crc kubenswrapper[4814]: I1202 14:44:01.438448 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" event={"ID":"4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad","Type":"ContainerStarted","Data":"df8bec736aa6848cba96695b2956fd3c344666166df3bec3ca247b699d9a5f59"} Dec 02 14:44:01 crc kubenswrapper[4814]: E1202 14:44:01.450812 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.158:5001/openstack-k8s-operators/watcher-operator:04f2b756afa9e575b2ebfa769c85094b932517cd\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" podUID="4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad" Dec 02 14:44:01 crc kubenswrapper[4814]: I1202 14:44:01.451094 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-cskhk" event={"ID":"391bcd0f-854f-43c3-ac40-a917e68bf3b8","Type":"ContainerStarted","Data":"71f56706033c820ef991360fb254a692ce85917b5b019e362f5e44c442211a71"} Dec 02 14:44:01 crc kubenswrapper[4814]: E1202 14:44:01.455376 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-fj2tl" podUID="ac66ba40-6166-47b9-9c47-f0f0637e774c" Dec 02 14:44:01 crc kubenswrapper[4814]: E1202 14:44:01.455516 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-cskhk" podUID="391bcd0f-854f-43c3-ac40-a917e68bf3b8" Dec 02 14:44:02 crc kubenswrapper[4814]: I1202 14:44:02.027735 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert\") pod \"infra-operator-controller-manager-57548d458d-h9gcz\" (UID: \"2f89b1a9-fb43-4ef5-a8f7-9a15b219022c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz" Dec 02 14:44:02 crc kubenswrapper[4814]: E1202 14:44:02.027940 4814 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 14:44:02 crc kubenswrapper[4814]: E1202 14:44:02.028127 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert podName:2f89b1a9-fb43-4ef5-a8f7-9a15b219022c nodeName:}" failed. 
No retries permitted until 2025-12-02 14:44:06.028109888 +0000 UTC m=+908.152472092 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert") pod "infra-operator-controller-manager-57548d458d-h9gcz" (UID: "2f89b1a9-fb43-4ef5-a8f7-9a15b219022c") : secret "infra-operator-webhook-server-cert" not found Dec 02 14:44:02 crc kubenswrapper[4814]: I1202 14:44:02.333323 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld\" (UID: \"491246a7-1bdb-4efc-99a8-16ab9502d44d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" Dec 02 14:44:02 crc kubenswrapper[4814]: E1202 14:44:02.333499 4814 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:44:02 crc kubenswrapper[4814]: E1202 14:44:02.333555 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert podName:491246a7-1bdb-4efc-99a8-16ab9502d44d nodeName:}" failed. No retries permitted until 2025-12-02 14:44:06.333540844 +0000 UTC m=+908.457903048 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" (UID: "491246a7-1bdb-4efc-99a8-16ab9502d44d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:44:02 crc kubenswrapper[4814]: E1202 14:44:02.460225 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-57776" podUID="c288b87f-0e35-4b3f-b0f5-1565425396ba" Dec 02 14:44:02 crc kubenswrapper[4814]: E1202 14:44:02.461068 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.158:5001/openstack-k8s-operators/watcher-operator:04f2b756afa9e575b2ebfa769c85094b932517cd\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" podUID="4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad" Dec 02 14:44:02 crc kubenswrapper[4814]: E1202 14:44:02.461930 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mrbvm" podUID="55d8049a-e6f9-4ace-ada2-8fc96a06f1e4" Dec 02 14:44:02 crc kubenswrapper[4814]: E1202 14:44:02.463841 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-cskhk" podUID="391bcd0f-854f-43c3-ac40-a917e68bf3b8" Dec 02 14:44:02 crc kubenswrapper[4814]: I1202 14:44:02.739786 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:44:02 crc kubenswrapper[4814]: I1202 14:44:02.739839 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:44:02 crc kubenswrapper[4814]: E1202 14:44:02.739980 4814 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 14:44:02 crc kubenswrapper[4814]: E1202 14:44:02.740044 4814 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs podName:1d9fb058-e400-4c7c-af16-093ca5a5f1bc nodeName:}" failed. No retries permitted until 2025-12-02 14:44:06.740022756 +0000 UTC m=+908.864384960 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs") pod "openstack-operator-controller-manager-59c6849d55-jttnz" (UID: "1d9fb058-e400-4c7c-af16-093ca5a5f1bc") : secret "metrics-server-cert" not found Dec 02 14:44:02 crc kubenswrapper[4814]: E1202 14:44:02.740032 4814 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 14:44:02 crc kubenswrapper[4814]: E1202 14:44:02.740151 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs podName:1d9fb058-e400-4c7c-af16-093ca5a5f1bc nodeName:}" failed. No retries permitted until 2025-12-02 14:44:06.740126409 +0000 UTC m=+908.864488613 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs") pod "openstack-operator-controller-manager-59c6849d55-jttnz" (UID: "1d9fb058-e400-4c7c-af16-093ca5a5f1bc") : secret "webhook-server-cert" not found Dec 02 14:44:06 crc kubenswrapper[4814]: I1202 14:44:06.093187 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert\") pod \"infra-operator-controller-manager-57548d458d-h9gcz\" (UID: \"2f89b1a9-fb43-4ef5-a8f7-9a15b219022c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz" Dec 02 14:44:06 crc kubenswrapper[4814]: E1202 14:44:06.093495 4814 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 14:44:06 crc kubenswrapper[4814]: E1202 14:44:06.093915 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert podName:2f89b1a9-fb43-4ef5-a8f7-9a15b219022c nodeName:}" failed. No retries permitted until 2025-12-02 14:44:14.093878546 +0000 UTC m=+916.218240790 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert") pod "infra-operator-controller-manager-57548d458d-h9gcz" (UID: "2f89b1a9-fb43-4ef5-a8f7-9a15b219022c") : secret "infra-operator-webhook-server-cert" not found Dec 02 14:44:06 crc kubenswrapper[4814]: I1202 14:44:06.399673 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld\" (UID: \"491246a7-1bdb-4efc-99a8-16ab9502d44d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" Dec 02 14:44:06 crc kubenswrapper[4814]: E1202 14:44:06.399866 4814 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:44:06 crc kubenswrapper[4814]: E1202 14:44:06.399949 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert podName:491246a7-1bdb-4efc-99a8-16ab9502d44d nodeName:}" failed. No retries permitted until 2025-12-02 14:44:14.399928956 +0000 UTC m=+916.524291160 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" (UID: "491246a7-1bdb-4efc-99a8-16ab9502d44d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:44:06 crc kubenswrapper[4814]: I1202 14:44:06.807005 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:44:06 crc kubenswrapper[4814]: I1202 14:44:06.807112 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:44:06 crc kubenswrapper[4814]: E1202 14:44:06.807177 4814 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 14:44:06 crc kubenswrapper[4814]: E1202 14:44:06.807310 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs podName:1d9fb058-e400-4c7c-af16-093ca5a5f1bc nodeName:}" failed. No retries permitted until 2025-12-02 14:44:14.807281569 +0000 UTC m=+916.931643853 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs") pod "openstack-operator-controller-manager-59c6849d55-jttnz" (UID: "1d9fb058-e400-4c7c-af16-093ca5a5f1bc") : secret "webhook-server-cert" not found Dec 02 14:44:06 crc kubenswrapper[4814]: E1202 14:44:06.807380 4814 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 14:44:06 crc kubenswrapper[4814]: E1202 14:44:06.807498 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs podName:1d9fb058-e400-4c7c-af16-093ca5a5f1bc nodeName:}" failed. No retries permitted until 2025-12-02 14:44:14.807464314 +0000 UTC m=+916.931826578 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs") pod "openstack-operator-controller-manager-59c6849d55-jttnz" (UID: "1d9fb058-e400-4c7c-af16-093ca5a5f1bc") : secret "metrics-server-cert" not found Dec 02 14:44:13 crc kubenswrapper[4814]: E1202 14:44:13.081284 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530" Dec 02 14:44:13 crc kubenswrapper[4814]: E1202 14:44:13.081972 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zk67m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-rgnvj_openstack-operators(41513450-3327-470f-b978-b5fa13c72991): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:44:14 crc kubenswrapper[4814]: I1202 14:44:14.136716 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert\") pod \"infra-operator-controller-manager-57548d458d-h9gcz\" (UID: \"2f89b1a9-fb43-4ef5-a8f7-9a15b219022c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz" Dec 02 14:44:14 crc kubenswrapper[4814]: E1202 14:44:14.137713 4814 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 14:44:14 crc kubenswrapper[4814]: E1202 14:44:14.137854 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert podName:2f89b1a9-fb43-4ef5-a8f7-9a15b219022c nodeName:}" failed. 
No retries permitted until 2025-12-02 14:44:30.137834519 +0000 UTC m=+932.262196723 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert") pod "infra-operator-controller-manager-57548d458d-h9gcz" (UID: "2f89b1a9-fb43-4ef5-a8f7-9a15b219022c") : secret "infra-operator-webhook-server-cert" not found Dec 02 14:44:14 crc kubenswrapper[4814]: I1202 14:44:14.442472 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld\" (UID: \"491246a7-1bdb-4efc-99a8-16ab9502d44d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" Dec 02 14:44:14 crc kubenswrapper[4814]: E1202 14:44:14.442676 4814 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:44:14 crc kubenswrapper[4814]: E1202 14:44:14.442798 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert podName:491246a7-1bdb-4efc-99a8-16ab9502d44d nodeName:}" failed. No retries permitted until 2025-12-02 14:44:30.442764843 +0000 UTC m=+932.567127087 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" (UID: "491246a7-1bdb-4efc-99a8-16ab9502d44d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:44:14 crc kubenswrapper[4814]: E1202 14:44:14.848696 4814 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 14:44:14 crc kubenswrapper[4814]: E1202 14:44:14.848830 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs podName:1d9fb058-e400-4c7c-af16-093ca5a5f1bc nodeName:}" failed. No retries permitted until 2025-12-02 14:44:30.848800065 +0000 UTC m=+932.973162309 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs") pod "openstack-operator-controller-manager-59c6849d55-jttnz" (UID: "1d9fb058-e400-4c7c-af16-093ca5a5f1bc") : secret "webhook-server-cert" not found Dec 02 14:44:14 crc kubenswrapper[4814]: I1202 14:44:14.848513 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:44:14 crc kubenswrapper[4814]: I1202 14:44:14.849495 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " 
pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:44:14 crc kubenswrapper[4814]: E1202 14:44:14.849701 4814 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 14:44:14 crc kubenswrapper[4814]: E1202 14:44:14.849883 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs podName:1d9fb058-e400-4c7c-af16-093ca5a5f1bc nodeName:}" failed. No retries permitted until 2025-12-02 14:44:30.849842531 +0000 UTC m=+932.974204785 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs") pod "openstack-operator-controller-manager-59c6849d55-jttnz" (UID: "1d9fb058-e400-4c7c-af16-093ca5a5f1bc") : secret "metrics-server-cert" not found Dec 02 14:44:21 crc kubenswrapper[4814]: I1202 14:44:21.613383 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fgzmw"] Dec 02 14:44:21 crc kubenswrapper[4814]: I1202 14:44:21.618218 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgzmw" Dec 02 14:44:21 crc kubenswrapper[4814]: I1202 14:44:21.628211 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgzmw"] Dec 02 14:44:21 crc kubenswrapper[4814]: I1202 14:44:21.766719 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2364f74-72df-4afc-ab94-c119c821d5c8-utilities\") pod \"redhat-marketplace-fgzmw\" (UID: \"d2364f74-72df-4afc-ab94-c119c821d5c8\") " pod="openshift-marketplace/redhat-marketplace-fgzmw" Dec 02 14:44:21 crc kubenswrapper[4814]: I1202 14:44:21.767005 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2364f74-72df-4afc-ab94-c119c821d5c8-catalog-content\") pod \"redhat-marketplace-fgzmw\" (UID: \"d2364f74-72df-4afc-ab94-c119c821d5c8\") " pod="openshift-marketplace/redhat-marketplace-fgzmw" Dec 02 14:44:21 crc kubenswrapper[4814]: I1202 14:44:21.767437 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8wbm\" (UniqueName: \"kubernetes.io/projected/d2364f74-72df-4afc-ab94-c119c821d5c8-kube-api-access-h8wbm\") pod \"redhat-marketplace-fgzmw\" (UID: \"d2364f74-72df-4afc-ab94-c119c821d5c8\") " pod="openshift-marketplace/redhat-marketplace-fgzmw" Dec 02 14:44:21 crc kubenswrapper[4814]: I1202 14:44:21.871508 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8wbm\" (UniqueName: \"kubernetes.io/projected/d2364f74-72df-4afc-ab94-c119c821d5c8-kube-api-access-h8wbm\") pod \"redhat-marketplace-fgzmw\" (UID: \"d2364f74-72df-4afc-ab94-c119c821d5c8\") " pod="openshift-marketplace/redhat-marketplace-fgzmw" Dec 02 14:44:21 crc kubenswrapper[4814]: I1202 14:44:21.871943 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2364f74-72df-4afc-ab94-c119c821d5c8-utilities\") pod \"redhat-marketplace-fgzmw\" (UID: \"d2364f74-72df-4afc-ab94-c119c821d5c8\") " pod="openshift-marketplace/redhat-marketplace-fgzmw" Dec 02 14:44:21 crc kubenswrapper[4814]: I1202 14:44:21.872844 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2364f74-72df-4afc-ab94-c119c821d5c8-catalog-content\") pod \"redhat-marketplace-fgzmw\" (UID: \"d2364f74-72df-4afc-ab94-c119c821d5c8\") " pod="openshift-marketplace/redhat-marketplace-fgzmw" Dec 02 14:44:21 crc kubenswrapper[4814]: I1202 14:44:21.872614 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2364f74-72df-4afc-ab94-c119c821d5c8-utilities\") pod \"redhat-marketplace-fgzmw\" (UID: \"d2364f74-72df-4afc-ab94-c119c821d5c8\") " pod="openshift-marketplace/redhat-marketplace-fgzmw" Dec 02 14:44:21 crc kubenswrapper[4814]: I1202 14:44:21.873173 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2364f74-72df-4afc-ab94-c119c821d5c8-catalog-content\") pod \"redhat-marketplace-fgzmw\" (UID: \"d2364f74-72df-4afc-ab94-c119c821d5c8\") " pod="openshift-marketplace/redhat-marketplace-fgzmw" Dec 02 14:44:21 crc kubenswrapper[4814]: I1202 14:44:21.891381 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8wbm\" (UniqueName: \"kubernetes.io/projected/d2364f74-72df-4afc-ab94-c119c821d5c8-kube-api-access-h8wbm\") pod \"redhat-marketplace-fgzmw\" (UID: \"d2364f74-72df-4afc-ab94-c119c821d5c8\") " pod="openshift-marketplace/redhat-marketplace-fgzmw" Dec 02 14:44:21 crc kubenswrapper[4814]: I1202 14:44:21.983575 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgzmw" Dec 02 14:44:30 crc kubenswrapper[4814]: I1202 14:44:30.211137 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert\") pod \"infra-operator-controller-manager-57548d458d-h9gcz\" (UID: \"2f89b1a9-fb43-4ef5-a8f7-9a15b219022c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz" Dec 02 14:44:30 crc kubenswrapper[4814]: I1202 14:44:30.220348 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f89b1a9-fb43-4ef5-a8f7-9a15b219022c-cert\") pod \"infra-operator-controller-manager-57548d458d-h9gcz\" (UID: \"2f89b1a9-fb43-4ef5-a8f7-9a15b219022c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz" Dec 02 14:44:30 crc kubenswrapper[4814]: I1202 14:44:30.503568 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-25b2c" Dec 02 14:44:30 crc kubenswrapper[4814]: I1202 14:44:30.511526 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz" Dec 02 14:44:30 crc kubenswrapper[4814]: I1202 14:44:30.515660 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld\" (UID: \"491246a7-1bdb-4efc-99a8-16ab9502d44d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" Dec 02 14:44:30 crc kubenswrapper[4814]: I1202 14:44:30.523489 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/491246a7-1bdb-4efc-99a8-16ab9502d44d-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld\" (UID: \"491246a7-1bdb-4efc-99a8-16ab9502d44d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" Dec 02 14:44:30 crc kubenswrapper[4814]: I1202 14:44:30.735064 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" Dec 02 14:44:30 crc kubenswrapper[4814]: I1202 14:44:30.922029 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:44:30 crc kubenswrapper[4814]: I1202 14:44:30.922077 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:44:30 crc kubenswrapper[4814]: I1202 14:44:30.927850 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-metrics-certs\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:44:30 crc kubenswrapper[4814]: I1202 14:44:30.927954 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d9fb058-e400-4c7c-af16-093ca5a5f1bc-webhook-certs\") pod \"openstack-operator-controller-manager-59c6849d55-jttnz\" (UID: \"1d9fb058-e400-4c7c-af16-093ca5a5f1bc\") " pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:44:31 crc kubenswrapper[4814]: E1202 14:44:31.096910 4814 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" Dec 02 14:44:31 crc kubenswrapper[4814]: E1202 14:44:31.097094 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4m9bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-z92q9_openstack-operators(37189c89-1420-498e-bcb9-d42e0cd65ba9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:44:31 crc kubenswrapper[4814]: E1202 14:44:31.133668 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Dec 02 14:44:31 crc kubenswrapper[4814]: E1202 14:44:31.134167 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wb4v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-9nxkr_openstack-operators(f36f9c57-13d7-48e0-b519-89e87670e082): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:44:31 crc kubenswrapper[4814]: I1202 14:44:31.181821 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:44:31 crc kubenswrapper[4814]: I1202 14:44:31.505761 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5l8nn"] Dec 02 14:44:31 crc kubenswrapper[4814]: I1202 14:44:31.508561 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5l8nn" Dec 02 14:44:31 crc kubenswrapper[4814]: I1202 14:44:31.521188 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5l8nn"] Dec 02 14:44:31 crc kubenswrapper[4814]: I1202 14:44:31.634830 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28-catalog-content\") pod \"community-operators-5l8nn\" (UID: \"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28\") " pod="openshift-marketplace/community-operators-5l8nn" Dec 02 14:44:31 crc kubenswrapper[4814]: I1202 14:44:31.634887 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28-utilities\") pod \"community-operators-5l8nn\" (UID: \"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28\") " pod="openshift-marketplace/community-operators-5l8nn" Dec 02 14:44:31 crc kubenswrapper[4814]: I1202 14:44:31.634936 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2rjv\" (UniqueName: \"kubernetes.io/projected/19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28-kube-api-access-z2rjv\") pod \"community-operators-5l8nn\" (UID: \"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28\") " pod="openshift-marketplace/community-operators-5l8nn" Dec 02 14:44:31 crc kubenswrapper[4814]: I1202 14:44:31.737065 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28-utilities\") pod \"community-operators-5l8nn\" (UID: \"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28\") " pod="openshift-marketplace/community-operators-5l8nn" Dec 02 14:44:31 crc kubenswrapper[4814]: I1202 14:44:31.737461 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z2rjv\" (UniqueName: \"kubernetes.io/projected/19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28-kube-api-access-z2rjv\") pod \"community-operators-5l8nn\" (UID: \"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28\") " pod="openshift-marketplace/community-operators-5l8nn" Dec 02 14:44:31 crc kubenswrapper[4814]: I1202 14:44:31.737558 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28-catalog-content\") pod \"community-operators-5l8nn\" (UID: \"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28\") " pod="openshift-marketplace/community-operators-5l8nn" Dec 02 14:44:31 crc kubenswrapper[4814]: I1202 14:44:31.737976 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28-utilities\") pod \"community-operators-5l8nn\" (UID: \"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28\") " pod="openshift-marketplace/community-operators-5l8nn" Dec 02 14:44:31 crc kubenswrapper[4814]: I1202 14:44:31.737993 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28-catalog-content\") pod \"community-operators-5l8nn\" (UID: \"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28\") " pod="openshift-marketplace/community-operators-5l8nn" Dec 02 14:44:31 crc kubenswrapper[4814]: I1202 14:44:31.767045 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2rjv\" (UniqueName: \"kubernetes.io/projected/19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28-kube-api-access-z2rjv\") pod \"community-operators-5l8nn\" (UID: \"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28\") " pod="openshift-marketplace/community-operators-5l8nn" Dec 02 14:44:31 crc kubenswrapper[4814]: I1202 14:44:31.835269 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5l8nn" Dec 02 14:44:32 crc kubenswrapper[4814]: E1202 14:44:32.812408 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809" Dec 02 14:44:32 crc kubenswrapper[4814]: E1202 14:44:32.812614 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bnf54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987cd8cd-7ttvw_openstack-operators(2bd51779-c0ec-438d-ad69-a737752b9b90): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:44:33 crc kubenswrapper[4814]: E1202 14:44:33.728827 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7" Dec 02 14:44:33 crc kubenswrapper[4814]: E1202 14:44:33.729060 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bblsk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-q7sbm_openstack-operators(b362d2bd-46b7-478b-af11-125a0d2fd5b0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:44:34 crc kubenswrapper[4814]: E1202 14:44:34.210901 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Dec 02 14:44:34 crc kubenswrapper[4814]: E1202 14:44:34.211092 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lkvh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-l5bdj_openstack-operators(9c15f550-1056-421a-aee0-301d768590c6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:44:34 crc kubenswrapper[4814]: E1202 14:44:34.910220 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Dec 02 14:44:34 crc kubenswrapper[4814]: E1202 14:44:34.910410 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2nzwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-kxkhr_openstack-operators(8e50493d-4c39-4cb7-b752-714c0520e559): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:44:35 crc kubenswrapper[4814]: E1202 14:44:35.418456 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9" Dec 02 14:44:35 crc kubenswrapper[4814]: E1202 14:44:35.418642 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7qqgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7c79b5df47-zbs77_openstack-operators(5d626448-a1ab-4d7a-bc09-d0777abad82d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:44:38 crc kubenswrapper[4814]: E1202 14:44:38.841233 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 02 14:44:38 crc kubenswrapper[4814]: E1202 14:44:38.841624 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-87h64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-ngzqv_openstack-operators(88f105c6-4510-46e0-8be8-dc360a9b73ce): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:44:42 crc kubenswrapper[4814]: I1202 14:44:42.727614 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld"] Dec 02 14:44:42 crc kubenswrapper[4814]: I1202 14:44:42.955074 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgzmw"] Dec 02 14:44:43 crc kubenswrapper[4814]: W1202 14:44:43.658498 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2364f74_72df_4afc_ab94_c119c821d5c8.slice/crio-914541b6c80071bc2a9a9f2407d9ef958ce8f28e9357880d2ec0e0b4e485bd37 WatchSource:0}: Error finding container 914541b6c80071bc2a9a9f2407d9ef958ce8f28e9357880d2ec0e0b4e485bd37: Status 404 returned error can't find the container with id 914541b6c80071bc2a9a9f2407d9ef958ce8f28e9357880d2ec0e0b4e485bd37 Dec 02 14:44:43 crc kubenswrapper[4814]: I1202 
14:44:43.849330 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgzmw" event={"ID":"d2364f74-72df-4afc-ab94-c119c821d5c8","Type":"ContainerStarted","Data":"914541b6c80071bc2a9a9f2407d9ef958ce8f28e9357880d2ec0e0b4e485bd37"} Dec 02 14:44:43 crc kubenswrapper[4814]: I1202 14:44:43.850193 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" event={"ID":"491246a7-1bdb-4efc-99a8-16ab9502d44d","Type":"ContainerStarted","Data":"2d7a9c65a2a22ae796a7a24635e54dcea73e2bc60dc92b1e653650226a1142c4"} Dec 02 14:44:43 crc kubenswrapper[4814]: I1202 14:44:43.970219 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz"] Dec 02 14:44:44 crc kubenswrapper[4814]: I1202 14:44:44.099986 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5l8nn"] Dec 02 14:44:44 crc kubenswrapper[4814]: I1202 14:44:44.118628 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz"] Dec 02 14:44:44 crc kubenswrapper[4814]: W1202 14:44:44.853032 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f89b1a9_fb43_4ef5_a8f7_9a15b219022c.slice/crio-7b1d28250fd5e3085323626b59445a1c9c9586abfa607e7d6120e414bba68383 WatchSource:0}: Error finding container 7b1d28250fd5e3085323626b59445a1c9c9586abfa607e7d6120e414bba68383: Status 404 returned error can't find the container with id 7b1d28250fd5e3085323626b59445a1c9c9586abfa607e7d6120e414bba68383 Dec 02 14:44:44 crc kubenswrapper[4814]: W1202 14:44:44.873844 4814 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19aaaf80_2786_4bf7_97d9_3a8e2d2e1e28.slice/crio-742066b3d30e9258ec36b0a6fb1f426fd9a87124d65eff615082752ed52c9d15 WatchSource:0}: Error finding container 742066b3d30e9258ec36b0a6fb1f426fd9a87124d65eff615082752ed52c9d15: Status 404 returned error can't find the container with id 742066b3d30e9258ec36b0a6fb1f426fd9a87124d65eff615082752ed52c9d15 Dec 02 14:44:45 crc kubenswrapper[4814]: I1202 14:44:45.865023 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5l8nn" event={"ID":"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28","Type":"ContainerStarted","Data":"742066b3d30e9258ec36b0a6fb1f426fd9a87124d65eff615082752ed52c9d15"} Dec 02 14:44:45 crc kubenswrapper[4814]: I1202 14:44:45.866766 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz" event={"ID":"2f89b1a9-fb43-4ef5-a8f7-9a15b219022c","Type":"ContainerStarted","Data":"7b1d28250fd5e3085323626b59445a1c9c9586abfa607e7d6120e414bba68383"} Dec 02 14:44:45 crc kubenswrapper[4814]: W1202 14:44:45.996516 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d9fb058_e400_4c7c_af16_093ca5a5f1bc.slice/crio-0ffe7be31fd0203a72feaa2b4f1f623eff0285212e3b13e5af3f3c3605bb5321 WatchSource:0}: Error finding container 0ffe7be31fd0203a72feaa2b4f1f623eff0285212e3b13e5af3f3c3605bb5321: Status 404 returned error can't find the container with id 0ffe7be31fd0203a72feaa2b4f1f623eff0285212e3b13e5af3f3c3605bb5321 Dec 02 14:44:46 crc kubenswrapper[4814]: I1202 14:44:46.877377 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sclz2" event={"ID":"ce263a59-b81e-4394-9dbd-fe091bc7075f","Type":"ContainerStarted","Data":"257d85c2673bdf87aad169efb89dceaf8ff749695eddd52b93d45054ef42100e"} Dec 02 14:44:46 crc 
kubenswrapper[4814]: I1202 14:44:46.878872 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-knqvz" event={"ID":"a72143d5-40c2-4fbe-87de-c3be61143c9e","Type":"ContainerStarted","Data":"35e102a48257238631f3b5c20d20d6a6206733e0c27240463fb625c63c4c2151"} Dec 02 14:44:46 crc kubenswrapper[4814]: I1202 14:44:46.879858 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hkkjn" event={"ID":"d9fc1bb3-efc9-4832-a2d6-b60b3565fb54","Type":"ContainerStarted","Data":"1fe19ec5f9b7da8ef39e08bfd30d9abb84bde0d85c23124eae9aea0231c157b9"} Dec 02 14:44:46 crc kubenswrapper[4814]: I1202 14:44:46.880903 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-fj2tl" event={"ID":"ac66ba40-6166-47b9-9c47-f0f0637e774c","Type":"ContainerStarted","Data":"2cef13906b26e34790d3b0ee6eb2ae8b890e62211aa943912b8af8b86564f561"} Dec 02 14:44:46 crc kubenswrapper[4814]: I1202 14:44:46.882088 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" event={"ID":"1d9fb058-e400-4c7c-af16-093ca5a5f1bc","Type":"ContainerStarted","Data":"0ffe7be31fd0203a72feaa2b4f1f623eff0285212e3b13e5af3f3c3605bb5321"} Dec 02 14:44:46 crc kubenswrapper[4814]: I1202 14:44:46.883113 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-w6xr2" event={"ID":"94a7cab9-3ba9-49c1-b468-b42424e3c43c","Type":"ContainerStarted","Data":"6a4d52cd768d2d36e18ec4341bb1ddd85e490825f4e7c1898dcc64fcd42bb119"} Dec 02 14:44:46 crc kubenswrapper[4814]: I1202 14:44:46.884215 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d268c" 
event={"ID":"0848a668-6f55-429e-951d-ef285f20a1b6","Type":"ContainerStarted","Data":"c12298a59b914417b24b6de5e29c89d8d4f5668b9484cfda571bbcac9ce83d3d"} Dec 02 14:44:46 crc kubenswrapper[4814]: I1202 14:44:46.885152 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-57bd6" event={"ID":"a2a6620c-b75b-4223-ac5c-721a57d69a06","Type":"ContainerStarted","Data":"6c281d491f8b2d122bbfc67d3a573a1e1e85e61fcc9d609915dc6a1881bd1e48"} Dec 02 14:44:47 crc kubenswrapper[4814]: I1202 14:44:47.927848 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" event={"ID":"4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad","Type":"ContainerStarted","Data":"5c174e25c531400ac525b44965ac32299aa8e67048e75847896817b7e0b278cc"} Dec 02 14:44:47 crc kubenswrapper[4814]: I1202 14:44:47.932467 4814 generic.go:334] "Generic (PLEG): container finished" podID="19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28" containerID="24a7c9ae6ef0af07c35f96154cc716afdecc1399bed5b8e666f40c93eeec5de6" exitCode=0 Dec 02 14:44:47 crc kubenswrapper[4814]: I1202 14:44:47.932503 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5l8nn" event={"ID":"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28","Type":"ContainerDied","Data":"24a7c9ae6ef0af07c35f96154cc716afdecc1399bed5b8e666f40c93eeec5de6"} Dec 02 14:44:47 crc kubenswrapper[4814]: I1202 14:44:47.941515 4814 generic.go:334] "Generic (PLEG): container finished" podID="d2364f74-72df-4afc-ab94-c119c821d5c8" containerID="f14215ef45dc4a04746e44c94252b2e4c2b6d59df07b7730603d3879876d18f7" exitCode=0 Dec 02 14:44:47 crc kubenswrapper[4814]: I1202 14:44:47.941603 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgzmw" 
event={"ID":"d2364f74-72df-4afc-ab94-c119c821d5c8","Type":"ContainerDied","Data":"f14215ef45dc4a04746e44c94252b2e4c2b6d59df07b7730603d3879876d18f7"} Dec 02 14:44:47 crc kubenswrapper[4814]: I1202 14:44:47.948024 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mrbvm" event={"ID":"55d8049a-e6f9-4ace-ada2-8fc96a06f1e4","Type":"ContainerStarted","Data":"db2a9a0ab3347904679d5c5e624aad75af953d7ecb87d73775595bd087396401"} Dec 02 14:44:47 crc kubenswrapper[4814]: I1202 14:44:47.956922 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" event={"ID":"1d9fb058-e400-4c7c-af16-093ca5a5f1bc","Type":"ContainerStarted","Data":"e4995543aca1651540651756c3bd7c47cb4c9903a0dfed28349b68b426de405f"} Dec 02 14:44:47 crc kubenswrapper[4814]: I1202 14:44:47.957424 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:44:48 crc kubenswrapper[4814]: I1202 14:44:48.030621 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" podStartSLOduration=50.0305878 podStartE2EDuration="50.0305878s" podCreationTimestamp="2025-12-02 14:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:44:48.026916278 +0000 UTC m=+950.151278482" watchObservedRunningTime="2025-12-02 14:44:48.0305878 +0000 UTC m=+950.154950004" Dec 02 14:44:48 crc kubenswrapper[4814]: I1202 14:44:48.988010 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-57776" 
event={"ID":"c288b87f-0e35-4b3f-b0f5-1565425396ba","Type":"ContainerStarted","Data":"781ca9aec50217b84590b922cf908bc9f422922a8d7b11e4fe64681a4c4d3583"} Dec 02 14:44:48 crc kubenswrapper[4814]: I1202 14:44:48.991899 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-cskhk" event={"ID":"391bcd0f-854f-43c3-ac40-a917e68bf3b8","Type":"ContainerStarted","Data":"29da4bb02cb7d2d7e606e3153ae67aadcd809c5578207420e95c6871d000b5f4"} Dec 02 14:44:49 crc kubenswrapper[4814]: I1202 14:44:49.003460 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-57776" podStartSLOduration=5.507746645 podStartE2EDuration="51.003438327s" podCreationTimestamp="2025-12-02 14:43:58 +0000 UTC" firstStartedPulling="2025-12-02 14:44:00.515330012 +0000 UTC m=+902.639692216" lastFinishedPulling="2025-12-02 14:44:46.011021694 +0000 UTC m=+948.135383898" observedRunningTime="2025-12-02 14:44:49.001082998 +0000 UTC m=+951.125445202" watchObservedRunningTime="2025-12-02 14:44:49.003438327 +0000 UTC m=+951.127800531" Dec 02 14:44:50 crc kubenswrapper[4814]: I1202 14:44:50.650943 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:44:50 crc kubenswrapper[4814]: I1202 14:44:50.651043 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:44:52 crc kubenswrapper[4814]: I1202 14:44:52.023435 4814 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-knqvz" event={"ID":"a72143d5-40c2-4fbe-87de-c3be61143c9e","Type":"ContainerStarted","Data":"1ff06b431628f14faee14e7081f9c0cd1ae0b26a033bc51eda384732a8d2aae4"} Dec 02 14:44:52 crc kubenswrapper[4814]: I1202 14:44:52.024347 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-knqvz" Dec 02 14:44:52 crc kubenswrapper[4814]: I1202 14:44:52.027138 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-knqvz" Dec 02 14:44:52 crc kubenswrapper[4814]: I1202 14:44:52.027913 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-57bd6" event={"ID":"a2a6620c-b75b-4223-ac5c-721a57d69a06","Type":"ContainerStarted","Data":"2675a583f03cc09b7815530ea4ac49ac8595ba32023a25e6b812941af3bc0701"} Dec 02 14:44:52 crc kubenswrapper[4814]: I1202 14:44:52.028325 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-57bd6" Dec 02 14:44:52 crc kubenswrapper[4814]: I1202 14:44:52.030814 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-57bd6" Dec 02 14:44:52 crc kubenswrapper[4814]: I1202 14:44:52.050595 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-knqvz" podStartSLOduration=3.062853925 podStartE2EDuration="55.050576733s" podCreationTimestamp="2025-12-02 14:43:57 +0000 UTC" firstStartedPulling="2025-12-02 14:43:59.420056708 +0000 UTC m=+901.544418912" lastFinishedPulling="2025-12-02 14:44:51.407779506 +0000 UTC m=+953.532141720" observedRunningTime="2025-12-02 14:44:52.045977659 
+0000 UTC m=+954.170339863" watchObservedRunningTime="2025-12-02 14:44:52.050576733 +0000 UTC m=+954.174938937" Dec 02 14:44:52 crc kubenswrapper[4814]: I1202 14:44:52.116088 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-57bd6" podStartSLOduration=3.151900492 podStartE2EDuration="55.116057154s" podCreationTimestamp="2025-12-02 14:43:57 +0000 UTC" firstStartedPulling="2025-12-02 14:43:59.420640141 +0000 UTC m=+901.545002345" lastFinishedPulling="2025-12-02 14:44:51.384796783 +0000 UTC m=+953.509159007" observedRunningTime="2025-12-02 14:44:52.101900102 +0000 UTC m=+954.226262306" watchObservedRunningTime="2025-12-02 14:44:52.116057154 +0000 UTC m=+954.240419368" Dec 02 14:44:53 crc kubenswrapper[4814]: E1202 14:44:53.194339 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-7ttvw" podUID="2bd51779-c0ec-438d-ad69-a737752b9b90" Dec 02 14:44:53 crc kubenswrapper[4814]: E1202 14:44:53.459384 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-rgnvj" podUID="41513450-3327-470f-b978-b5fa13c72991" Dec 02 14:44:53 crc kubenswrapper[4814]: E1202 14:44:53.773919 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-q7sbm" podUID="b362d2bd-46b7-478b-af11-125a0d2fd5b0" Dec 02 14:44:53 crc kubenswrapper[4814]: E1202 14:44:53.976300 4814 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zbs77" podUID="5d626448-a1ab-4d7a-bc09-d0777abad82d" Dec 02 14:44:54 crc kubenswrapper[4814]: E1202 14:44:54.052155 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-z92q9" podUID="37189c89-1420-498e-bcb9-d42e0cd65ba9" Dec 02 14:44:54 crc kubenswrapper[4814]: E1202 14:44:54.054054 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9nxkr" podUID="f36f9c57-13d7-48e0-b519-89e87670e082" Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.067090 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mrbvm" event={"ID":"55d8049a-e6f9-4ace-ada2-8fc96a06f1e4","Type":"ContainerStarted","Data":"ecc452ed9842a8e731b9b2cf1c4f920e9f2f0ae6b163499e08ff89724b6ac0d9"} Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.068404 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mrbvm" Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.071604 4814 generic.go:334] "Generic (PLEG): container finished" podID="19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28" containerID="2514f64af66e8ebb342004e053e935e0adea98bc5eb79870f2389fa410bc6bde" exitCode=0 Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.071651 4814 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5l8nn" event={"ID":"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28","Type":"ContainerDied","Data":"2514f64af66e8ebb342004e053e935e0adea98bc5eb79870f2389fa410bc6bde"} Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.073194 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mrbvm" Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.074167 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-q7sbm" event={"ID":"b362d2bd-46b7-478b-af11-125a0d2fd5b0","Type":"ContainerStarted","Data":"204323dbbabba274070869ec2fcdb3c508ff4846735f9e36bcb92b2c2a7f49e1"} Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.078438 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-fj2tl" event={"ID":"ac66ba40-6166-47b9-9c47-f0f0637e774c","Type":"ContainerStarted","Data":"ab696f520d66f1249ae4ca0df600c5f24700407484ff6e4c365a91e6f6268803"} Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.079154 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-fj2tl" Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.081926 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-fj2tl" Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.082059 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz" event={"ID":"2f89b1a9-fb43-4ef5-a8f7-9a15b219022c","Type":"ContainerStarted","Data":"90871ed62c5073324d716abd29f158700ddf935f88b7a22055bb89c5e0981c94"} Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.103611 4814 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-cskhk" event={"ID":"391bcd0f-854f-43c3-ac40-a917e68bf3b8","Type":"ContainerStarted","Data":"bb79231e28aa5adeca44ed966eb7704cdab415b3b4c564d7f10a6b297334d380"} Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.104314 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-cskhk" Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.109837 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-cskhk" Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.109838 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mrbvm" podStartSLOduration=3.6874620350000002 podStartE2EDuration="56.109825808s" podCreationTimestamp="2025-12-02 14:43:58 +0000 UTC" firstStartedPulling="2025-12-02 14:44:00.490447224 +0000 UTC m=+902.614809428" lastFinishedPulling="2025-12-02 14:44:52.912810997 +0000 UTC m=+955.037173201" observedRunningTime="2025-12-02 14:44:54.096900196 +0000 UTC m=+956.221262400" watchObservedRunningTime="2025-12-02 14:44:54.109825808 +0000 UTC m=+956.234188012" Dec 02 14:44:54 crc kubenswrapper[4814]: E1202 14:44:54.119933 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-kxkhr" podUID="8e50493d-4c39-4cb7-b752-714c0520e559" Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.121517 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" 
event={"ID":"491246a7-1bdb-4efc-99a8-16ab9502d44d","Type":"ContainerStarted","Data":"da2d34b71d7fe62535a6619db4cd75d796907ae6b765b8ad8de404e43cbee375"} Dec 02 14:44:54 crc kubenswrapper[4814]: E1202 14:44:54.273159 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-ngzqv" podUID="88f105c6-4510-46e0-8be8-dc360a9b73ce" Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.276733 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sclz2" event={"ID":"ce263a59-b81e-4394-9dbd-fe091bc7075f","Type":"ContainerStarted","Data":"7855a46aa7c1b38ea70d0ca5ee28a2e6bdd7f87d8175ee4cf0438d511a3c5446"} Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.276797 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sclz2" Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.276839 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sclz2" Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.311533 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-fj2tl" podStartSLOduration=3.975303922 podStartE2EDuration="56.311513251s" podCreationTimestamp="2025-12-02 14:43:58 +0000 UTC" firstStartedPulling="2025-12-02 14:44:00.319526716 +0000 UTC m=+902.443888920" lastFinishedPulling="2025-12-02 14:44:52.655736025 +0000 UTC m=+954.780098249" observedRunningTime="2025-12-02 14:44:54.213996332 +0000 UTC m=+956.338358536" watchObservedRunningTime="2025-12-02 14:44:54.311513251 +0000 UTC m=+956.435875455" Dec 02 14:44:54 crc 
kubenswrapper[4814]: I1202 14:44:54.312297 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-z92q9" event={"ID":"37189c89-1420-498e-bcb9-d42e0cd65ba9","Type":"ContainerStarted","Data":"2c81d92cbe63ad5cc522a4421f2904c4da96fc77e15719e6c718339f0693b819"} Dec 02 14:44:54 crc kubenswrapper[4814]: E1202 14:44:54.312380 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-l5bdj" podUID="9c15f550-1056-421a-aee0-301d768590c6" Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.326668 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-rgnvj" event={"ID":"41513450-3327-470f-b978-b5fa13c72991","Type":"ContainerStarted","Data":"537f5f7968bd203b10d9591a0f2293befdd234bdc0a308613fbee06af3143870"} Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.363219 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9nxkr" event={"ID":"f36f9c57-13d7-48e0-b519-89e87670e082","Type":"ContainerStarted","Data":"86afedb457eeab50afab80965cbe8cdad311e47a9ab4dcabc7ff254acf2cd6f5"} Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.365170 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-cskhk" podStartSLOduration=4.053587515 podStartE2EDuration="56.365148116s" podCreationTimestamp="2025-12-02 14:43:58 +0000 UTC" firstStartedPulling="2025-12-02 14:44:00.512042054 +0000 UTC m=+902.636404258" lastFinishedPulling="2025-12-02 14:44:52.823602655 +0000 UTC m=+954.947964859" observedRunningTime="2025-12-02 14:44:54.28536317 +0000 UTC m=+956.409725374" 
watchObservedRunningTime="2025-12-02 14:44:54.365148116 +0000 UTC m=+956.489510320" Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.371716 4814 generic.go:334] "Generic (PLEG): container finished" podID="d2364f74-72df-4afc-ab94-c119c821d5c8" containerID="4a3893841a9f9e4ca74235f6ece7fcc48f652f63586b35861bf963b7ff0e8d22" exitCode=0 Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.371777 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgzmw" event={"ID":"d2364f74-72df-4afc-ab94-c119c821d5c8","Type":"ContainerDied","Data":"4a3893841a9f9e4ca74235f6ece7fcc48f652f63586b35861bf963b7ff0e8d22"} Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.373286 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sclz2" podStartSLOduration=3.76037918 podStartE2EDuration="56.373271409s" podCreationTimestamp="2025-12-02 14:43:58 +0000 UTC" firstStartedPulling="2025-12-02 14:43:59.405699198 +0000 UTC m=+901.530061402" lastFinishedPulling="2025-12-02 14:44:52.018591427 +0000 UTC m=+954.142953631" observedRunningTime="2025-12-02 14:44:54.337284322 +0000 UTC m=+956.461646526" watchObservedRunningTime="2025-12-02 14:44:54.373271409 +0000 UTC m=+956.497633613" Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.379175 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-7ttvw" event={"ID":"2bd51779-c0ec-438d-ad69-a737752b9b90","Type":"ContainerStarted","Data":"247edaa426ab100140a3e78a7ff3ff3e7deecae10295ed87fed12696461f599c"} Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.427731 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hkkjn" 
event={"ID":"d9fc1bb3-efc9-4832-a2d6-b60b3565fb54","Type":"ContainerStarted","Data":"2859c5bf40bd233c8363e2a1a1d8f2cff488a7a0608d2ccf03d2709642d2a373"} Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.428387 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hkkjn" Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.440144 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hkkjn" Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.445090 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zbs77" event={"ID":"5d626448-a1ab-4d7a-bc09-d0777abad82d","Type":"ContainerStarted","Data":"3549c7a3b007274ac01612b6ce00e3c2ce463d343c9827460e1c7dedc19ac34e"} Dec 02 14:44:54 crc kubenswrapper[4814]: I1202 14:44:54.486300 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hkkjn" podStartSLOduration=4.023205693 podStartE2EDuration="57.486282903s" podCreationTimestamp="2025-12-02 14:43:57 +0000 UTC" firstStartedPulling="2025-12-02 14:43:59.420081898 +0000 UTC m=+901.544444102" lastFinishedPulling="2025-12-02 14:44:52.883159108 +0000 UTC m=+955.007521312" observedRunningTime="2025-12-02 14:44:54.484511109 +0000 UTC m=+956.608873313" watchObservedRunningTime="2025-12-02 14:44:54.486282903 +0000 UTC m=+956.610645107" Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.452087 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" event={"ID":"491246a7-1bdb-4efc-99a8-16ab9502d44d","Type":"ContainerStarted","Data":"b34ee70478c4a549acfc6b25ab518d02ca926de47ac5f3c9107519a145e6f6e5"} Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 
14:44:55.453132 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.454835 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-q7sbm" event={"ID":"b362d2bd-46b7-478b-af11-125a0d2fd5b0","Type":"ContainerStarted","Data":"354daf8b64ecb08af147e8cfda6188a92c5e3c41fe5995510cc128ae46aa4975"} Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.455588 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-q7sbm" Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.457115 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-w6xr2" event={"ID":"94a7cab9-3ba9-49c1-b468-b42424e3c43c","Type":"ContainerStarted","Data":"ad2805c6c2649775045f5ce8727521c7db4663c55eba1f52689afe0b8f799b65"} Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.457525 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-w6xr2" Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.459063 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-w6xr2" Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.462839 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz" event={"ID":"2f89b1a9-fb43-4ef5-a8f7-9a15b219022c","Type":"ContainerStarted","Data":"dbad5fcfbfec89a8f3b81d1c94417d410dac311c7d837f3749c95fceac2c35fd"} Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.462949 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz" Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.465026 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" event={"ID":"4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad","Type":"ContainerStarted","Data":"9523611ab740011adfde0b2407da94ed6a0dcc7f2c2d09544bec3037c13a7b66"} Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.465739 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.467411 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-ngzqv" event={"ID":"88f105c6-4510-46e0-8be8-dc360a9b73ce","Type":"ContainerStarted","Data":"933985cfe376133962d84287a3dfc05ff12d2107eed47bc77e21eadb9463f9d9"} Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.470844 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5l8nn" event={"ID":"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28","Type":"ContainerStarted","Data":"cc838c571627c1c552509cf019ea81fb4e2f62c6c0a25a7480d739d09d54f054"} Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.474381 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.475086 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgzmw" event={"ID":"d2364f74-72df-4afc-ab94-c119c821d5c8","Type":"ContainerStarted","Data":"ee2f250aeccc1c1432a52410ca07bda3b6a7cfd0cffe2837882c1070925cb29c"} Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.476664 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-78f8948974-kxkhr" event={"ID":"8e50493d-4c39-4cb7-b752-714c0520e559","Type":"ContainerStarted","Data":"d79bed7de4c114d5820294dc176fe70c4916a4c65045353a85fbef57c2a74faf"} Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.479082 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-rgnvj" event={"ID":"41513450-3327-470f-b978-b5fa13c72991","Type":"ContainerStarted","Data":"36ccdf5ec6f4303bdda8336f014fe9f30f7626c91c050fd3087d8c48c368e206"} Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.485126 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-l5bdj" event={"ID":"9c15f550-1056-421a-aee0-301d768590c6","Type":"ContainerStarted","Data":"03f7ec17aa21ef5cdd0c102339e331a7b889ac07dfad1ede35cdb3d0468fa3ff"} Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.485253 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" podStartSLOduration=48.304183865 podStartE2EDuration="57.485214641s" podCreationTimestamp="2025-12-02 14:43:58 +0000 UTC" firstStartedPulling="2025-12-02 14:44:43.642513978 +0000 UTC m=+945.766876182" lastFinishedPulling="2025-12-02 14:44:52.823544754 +0000 UTC m=+954.947906958" observedRunningTime="2025-12-02 14:44:55.479890058 +0000 UTC m=+957.604252263" watchObservedRunningTime="2025-12-02 14:44:55.485214641 +0000 UTC m=+957.609576845" Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.490581 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d268c" event={"ID":"0848a668-6f55-429e-951d-ef285f20a1b6","Type":"ContainerStarted","Data":"8e48e416166b016fd5e39fc0d62e8f2f3bc4a1b8ec01012f0e87d3e79b400049"} Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 
14:44:55.503329 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-w6xr2" podStartSLOduration=5.087320729 podStartE2EDuration="57.503310152s" podCreationTimestamp="2025-12-02 14:43:58 +0000 UTC" firstStartedPulling="2025-12-02 14:44:00.427453926 +0000 UTC m=+902.551816120" lastFinishedPulling="2025-12-02 14:44:52.843443329 +0000 UTC m=+954.967805543" observedRunningTime="2025-12-02 14:44:55.497899887 +0000 UTC m=+957.622262091" watchObservedRunningTime="2025-12-02 14:44:55.503310152 +0000 UTC m=+957.627672356" Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.530806 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5l8nn" podStartSLOduration=17.905746796 podStartE2EDuration="24.530788746s" podCreationTimestamp="2025-12-02 14:44:31 +0000 UTC" firstStartedPulling="2025-12-02 14:44:47.938064066 +0000 UTC m=+950.062426270" lastFinishedPulling="2025-12-02 14:44:54.563106016 +0000 UTC m=+956.687468220" observedRunningTime="2025-12-02 14:44:55.528776856 +0000 UTC m=+957.653139070" watchObservedRunningTime="2025-12-02 14:44:55.530788746 +0000 UTC m=+957.655150940" Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.554025 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-q7sbm" podStartSLOduration=2.93286544 podStartE2EDuration="57.554006854s" podCreationTimestamp="2025-12-02 14:43:58 +0000 UTC" firstStartedPulling="2025-12-02 14:44:00.062490294 +0000 UTC m=+902.186852498" lastFinishedPulling="2025-12-02 14:44:54.683631708 +0000 UTC m=+956.807993912" observedRunningTime="2025-12-02 14:44:55.550000655 +0000 UTC m=+957.674362869" watchObservedRunningTime="2025-12-02 14:44:55.554006854 +0000 UTC m=+957.678369058" Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.605994 4814 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" podStartSLOduration=4.996139363 podStartE2EDuration="57.605978068s" podCreationTimestamp="2025-12-02 14:43:58 +0000 UTC" firstStartedPulling="2025-12-02 14:44:00.501344492 +0000 UTC m=+902.625706696" lastFinishedPulling="2025-12-02 14:44:53.111183177 +0000 UTC m=+955.235545401" observedRunningTime="2025-12-02 14:44:55.601591058 +0000 UTC m=+957.725953292" watchObservedRunningTime="2025-12-02 14:44:55.605978068 +0000 UTC m=+957.730340272" Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.664375 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz" podStartSLOduration=49.747331474 podStartE2EDuration="57.664357782s" podCreationTimestamp="2025-12-02 14:43:58 +0000 UTC" firstStartedPulling="2025-12-02 14:44:44.868486629 +0000 UTC m=+946.992848843" lastFinishedPulling="2025-12-02 14:44:52.785512957 +0000 UTC m=+954.909875151" observedRunningTime="2025-12-02 14:44:55.661453249 +0000 UTC m=+957.785815453" watchObservedRunningTime="2025-12-02 14:44:55.664357782 +0000 UTC m=+957.788719986" Dec 02 14:44:55 crc kubenswrapper[4814]: I1202 14:44:55.685153 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fgzmw" podStartSLOduration=27.669791818 podStartE2EDuration="34.685133179s" podCreationTimestamp="2025-12-02 14:44:21 +0000 UTC" firstStartedPulling="2025-12-02 14:44:47.942980858 +0000 UTC m=+950.067343062" lastFinishedPulling="2025-12-02 14:44:54.958322229 +0000 UTC m=+957.082684423" observedRunningTime="2025-12-02 14:44:55.682426601 +0000 UTC m=+957.806788815" watchObservedRunningTime="2025-12-02 14:44:55.685133179 +0000 UTC m=+957.809495383" Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.497366 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-7ttvw" event={"ID":"2bd51779-c0ec-438d-ad69-a737752b9b90","Type":"ContainerStarted","Data":"48823623dd56f8a4acb5a1766f399bddf638a4b8cbd1b3e12d794d9780c55afd"} Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.498577 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-7ttvw" Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.500618 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-ngzqv" event={"ID":"88f105c6-4510-46e0-8be8-dc360a9b73ce","Type":"ContainerStarted","Data":"a2e9474747c02c4dcf1f27af678d8f42bc78289a407075530f363ce106a4823a"} Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.500761 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-ngzqv" Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.502354 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9nxkr" event={"ID":"f36f9c57-13d7-48e0-b519-89e87670e082","Type":"ContainerStarted","Data":"97af88544992e89b7cb93f7eec5a350fab7770961e1bc935662c3d841d552971"} Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.502839 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9nxkr" Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.505048 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zbs77" event={"ID":"5d626448-a1ab-4d7a-bc09-d0777abad82d","Type":"ContainerStarted","Data":"ea79bf3069a677aee4d92aebeab85eae230fb1ff24724462b84f425829fd75b5"} Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.505462 4814 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zbs77" Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.506661 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-z92q9" event={"ID":"37189c89-1420-498e-bcb9-d42e0cd65ba9","Type":"ContainerStarted","Data":"3cdef24330dbc2ac5bf36a15afc5287e51002d2640817b871fe3e2c6aa758fba"} Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.507004 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-z92q9" Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.508258 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-kxkhr" event={"ID":"8e50493d-4c39-4cb7-b752-714c0520e559","Type":"ContainerStarted","Data":"e6f05b9e44e29fb33b243df50bfe6c96604391406771189e12c803d8c9f6c2fc"} Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.508628 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-kxkhr" Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.510640 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-l5bdj" event={"ID":"9c15f550-1056-421a-aee0-301d768590c6","Type":"ContainerStarted","Data":"5ef6a883e5bf5d1208f489e535e6ad4caf0a32fbaa88cf44893c33434fb0e3b4"} Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.510666 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-l5bdj" Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.513461 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d268c" Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.514780 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d268c" Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.523686 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d268c" podStartSLOduration=5.867551129 podStartE2EDuration="59.523675152s" podCreationTimestamp="2025-12-02 14:43:57 +0000 UTC" firstStartedPulling="2025-12-02 14:43:59.400523226 +0000 UTC m=+901.524885430" lastFinishedPulling="2025-12-02 14:44:53.056647249 +0000 UTC m=+955.181009453" observedRunningTime="2025-12-02 14:44:55.739979355 +0000 UTC m=+957.864341549" watchObservedRunningTime="2025-12-02 14:44:56.523675152 +0000 UTC m=+958.648037346" Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.553583 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-rgnvj" podStartSLOduration=3.035956896 podStartE2EDuration="58.553561327s" podCreationTimestamp="2025-12-02 14:43:58 +0000 UTC" firstStartedPulling="2025-12-02 14:43:59.485188456 +0000 UTC m=+901.609550660" lastFinishedPulling="2025-12-02 14:44:55.002792887 +0000 UTC m=+957.127155091" observedRunningTime="2025-12-02 14:44:56.551590678 +0000 UTC m=+958.675952872" watchObservedRunningTime="2025-12-02 14:44:56.553561327 +0000 UTC m=+958.677923531" Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.556025 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-7ttvw" podStartSLOduration=3.283872058 podStartE2EDuration="59.556018538s" podCreationTimestamp="2025-12-02 14:43:57 +0000 UTC" firstStartedPulling="2025-12-02 14:43:59.484716665 +0000 UTC 
m=+901.609078869" lastFinishedPulling="2025-12-02 14:44:55.756863145 +0000 UTC m=+957.881225349" observedRunningTime="2025-12-02 14:44:56.529132298 +0000 UTC m=+958.653494502" watchObservedRunningTime="2025-12-02 14:44:56.556018538 +0000 UTC m=+958.680380742" Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.609079 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zbs77" podStartSLOduration=3.150576761 podStartE2EDuration="58.609061699s" podCreationTimestamp="2025-12-02 14:43:58 +0000 UTC" firstStartedPulling="2025-12-02 14:44:00.297590538 +0000 UTC m=+902.421952762" lastFinishedPulling="2025-12-02 14:44:55.756075496 +0000 UTC m=+957.880437700" observedRunningTime="2025-12-02 14:44:56.605882889 +0000 UTC m=+958.730245093" watchObservedRunningTime="2025-12-02 14:44:56.609061699 +0000 UTC m=+958.733423903" Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.624488 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-z92q9" podStartSLOduration=4.003847765 podStartE2EDuration="59.624473292s" podCreationTimestamp="2025-12-02 14:43:57 +0000 UTC" firstStartedPulling="2025-12-02 14:43:59.485867042 +0000 UTC m=+901.610229246" lastFinishedPulling="2025-12-02 14:44:55.106492569 +0000 UTC m=+957.230854773" observedRunningTime="2025-12-02 14:44:56.621610101 +0000 UTC m=+958.745972305" watchObservedRunningTime="2025-12-02 14:44:56.624473292 +0000 UTC m=+958.748835486" Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.645824 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-kxkhr" podStartSLOduration=2.964637553 podStartE2EDuration="58.645806024s" podCreationTimestamp="2025-12-02 14:43:58 +0000 UTC" firstStartedPulling="2025-12-02 14:44:00.318634215 +0000 UTC m=+902.442996419" 
lastFinishedPulling="2025-12-02 14:44:55.999802696 +0000 UTC m=+958.124164890" observedRunningTime="2025-12-02 14:44:56.643676921 +0000 UTC m=+958.768039125" watchObservedRunningTime="2025-12-02 14:44:56.645806024 +0000 UTC m=+958.770168228" Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.707821 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9nxkr" podStartSLOduration=3.855486331 podStartE2EDuration="58.707806068s" podCreationTimestamp="2025-12-02 14:43:58 +0000 UTC" firstStartedPulling="2025-12-02 14:44:00.316843283 +0000 UTC m=+902.441205487" lastFinishedPulling="2025-12-02 14:44:55.16916302 +0000 UTC m=+957.293525224" observedRunningTime="2025-12-02 14:44:56.682341234 +0000 UTC m=+958.806703438" watchObservedRunningTime="2025-12-02 14:44:56.707806068 +0000 UTC m=+958.832168272" Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.731712 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-ngzqv" podStartSLOduration=2.792121283 podStartE2EDuration="58.731696533s" podCreationTimestamp="2025-12-02 14:43:58 +0000 UTC" firstStartedPulling="2025-12-02 14:44:00.298065759 +0000 UTC m=+902.422427963" lastFinishedPulling="2025-12-02 14:44:56.237641019 +0000 UTC m=+958.362003213" observedRunningTime="2025-12-02 14:44:56.726189876 +0000 UTC m=+958.850552080" watchObservedRunningTime="2025-12-02 14:44:56.731696533 +0000 UTC m=+958.856058737" Dec 02 14:44:56 crc kubenswrapper[4814]: I1202 14:44:56.732725 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-l5bdj" podStartSLOduration=2.683418209 podStartE2EDuration="58.732719168s" podCreationTimestamp="2025-12-02 14:43:58 +0000 UTC" firstStartedPulling="2025-12-02 14:43:59.951679196 +0000 UTC m=+902.076041410" lastFinishedPulling="2025-12-02 
14:44:56.000980155 +0000 UTC m=+958.125342369" observedRunningTime="2025-12-02 14:44:56.707056409 +0000 UTC m=+958.831418613" watchObservedRunningTime="2025-12-02 14:44:56.732719168 +0000 UTC m=+958.857081362" Dec 02 14:44:58 crc kubenswrapper[4814]: I1202 14:44:58.421628 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-rgnvj" Dec 02 14:45:00 crc kubenswrapper[4814]: I1202 14:45:00.182308 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv"] Dec 02 14:45:00 crc kubenswrapper[4814]: I1202 14:45:00.183498 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv" Dec 02 14:45:00 crc kubenswrapper[4814]: I1202 14:45:00.186208 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 14:45:00 crc kubenswrapper[4814]: I1202 14:45:00.186640 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 14:45:00 crc kubenswrapper[4814]: I1202 14:45:00.204427 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv"] Dec 02 14:45:00 crc kubenswrapper[4814]: I1202 14:45:00.229854 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6-config-volume\") pod \"collect-profiles-29411445-dq9jv\" (UID: \"0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv" Dec 02 14:45:00 crc kubenswrapper[4814]: I1202 14:45:00.230004 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6-secret-volume\") pod \"collect-profiles-29411445-dq9jv\" (UID: \"0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv" Dec 02 14:45:00 crc kubenswrapper[4814]: I1202 14:45:00.230263 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zsz5\" (UniqueName: \"kubernetes.io/projected/0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6-kube-api-access-9zsz5\") pod \"collect-profiles-29411445-dq9jv\" (UID: \"0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv" Dec 02 14:45:00 crc kubenswrapper[4814]: I1202 14:45:00.331397 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6-config-volume\") pod \"collect-profiles-29411445-dq9jv\" (UID: \"0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv" Dec 02 14:45:00 crc kubenswrapper[4814]: I1202 14:45:00.331455 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6-secret-volume\") pod \"collect-profiles-29411445-dq9jv\" (UID: \"0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv" Dec 02 14:45:00 crc kubenswrapper[4814]: I1202 14:45:00.331507 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zsz5\" (UniqueName: \"kubernetes.io/projected/0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6-kube-api-access-9zsz5\") pod \"collect-profiles-29411445-dq9jv\" (UID: \"0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv" Dec 02 14:45:00 crc kubenswrapper[4814]: I1202 14:45:00.332625 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6-config-volume\") pod \"collect-profiles-29411445-dq9jv\" (UID: \"0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv" Dec 02 14:45:00 crc kubenswrapper[4814]: I1202 14:45:00.348311 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6-secret-volume\") pod \"collect-profiles-29411445-dq9jv\" (UID: \"0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv" Dec 02 14:45:00 crc kubenswrapper[4814]: I1202 14:45:00.358775 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zsz5\" (UniqueName: \"kubernetes.io/projected/0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6-kube-api-access-9zsz5\") pod \"collect-profiles-29411445-dq9jv\" (UID: \"0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv" Dec 02 14:45:00 crc kubenswrapper[4814]: I1202 14:45:00.512815 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv" Dec 02 14:45:00 crc kubenswrapper[4814]: I1202 14:45:00.519536 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-h9gcz" Dec 02 14:45:00 crc kubenswrapper[4814]: I1202 14:45:00.750756 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld" Dec 02 14:45:00 crc kubenswrapper[4814]: I1202 14:45:00.800860 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv"] Dec 02 14:45:01 crc kubenswrapper[4814]: I1202 14:45:01.188089 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-59c6849d55-jttnz" Dec 02 14:45:01 crc kubenswrapper[4814]: I1202 14:45:01.547423 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv" event={"ID":"0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6","Type":"ContainerStarted","Data":"6f060d2e49fd53b4f8b771940882ba93e9b7af9d3208e6320d6e47b4d7f4eda3"} Dec 02 14:45:01 crc kubenswrapper[4814]: I1202 14:45:01.837467 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5l8nn" Dec 02 14:45:01 crc kubenswrapper[4814]: I1202 14:45:01.837762 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5l8nn" Dec 02 14:45:01 crc kubenswrapper[4814]: I1202 14:45:01.888667 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5l8nn" Dec 02 14:45:01 crc kubenswrapper[4814]: I1202 14:45:01.985378 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fgzmw" Dec 02 14:45:01 crc kubenswrapper[4814]: I1202 14:45:01.985419 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fgzmw" Dec 02 14:45:02 crc kubenswrapper[4814]: I1202 14:45:02.028650 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fgzmw" Dec 02 14:45:02 crc kubenswrapper[4814]: I1202 14:45:02.554347 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv" event={"ID":"0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6","Type":"ContainerStarted","Data":"8a1fae10782fa141fd33364ef2d9b8972a6d381bcd8152966494fde1f91602f4"} Dec 02 14:45:02 crc kubenswrapper[4814]: I1202 14:45:02.569594 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv" podStartSLOduration=2.569574961 podStartE2EDuration="2.569574961s" podCreationTimestamp="2025-12-02 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:45:02.567205962 +0000 UTC m=+964.691568176" watchObservedRunningTime="2025-12-02 14:45:02.569574961 +0000 UTC m=+964.693937165" Dec 02 14:45:02 crc kubenswrapper[4814]: I1202 14:45:02.614027 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fgzmw" Dec 02 14:45:02 crc kubenswrapper[4814]: I1202 14:45:02.625867 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5l8nn" Dec 02 14:45:03 crc kubenswrapper[4814]: I1202 14:45:03.563170 4814 generic.go:334] "Generic (PLEG): container finished" podID="0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6" 
containerID="8a1fae10782fa141fd33364ef2d9b8972a6d381bcd8152966494fde1f91602f4" exitCode=0 Dec 02 14:45:03 crc kubenswrapper[4814]: I1202 14:45:03.563301 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv" event={"ID":"0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6","Type":"ContainerDied","Data":"8a1fae10782fa141fd33364ef2d9b8972a6d381bcd8152966494fde1f91602f4"} Dec 02 14:45:03 crc kubenswrapper[4814]: I1202 14:45:03.722162 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgzmw"] Dec 02 14:45:04 crc kubenswrapper[4814]: I1202 14:45:04.570769 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fgzmw" podUID="d2364f74-72df-4afc-ab94-c119c821d5c8" containerName="registry-server" containerID="cri-o://ee2f250aeccc1c1432a52410ca07bda3b6a7cfd0cffe2837882c1070925cb29c" gracePeriod=2 Dec 02 14:45:04 crc kubenswrapper[4814]: I1202 14:45:04.719051 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5l8nn"] Dec 02 14:45:04 crc kubenswrapper[4814]: I1202 14:45:04.719431 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5l8nn" podUID="19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28" containerName="registry-server" containerID="cri-o://cc838c571627c1c552509cf019ea81fb4e2f62c6c0a25a7480d739d09d54f054" gracePeriod=2 Dec 02 14:45:04 crc kubenswrapper[4814]: I1202 14:45:04.875467 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv" Dec 02 14:45:04 crc kubenswrapper[4814]: I1202 14:45:04.923946 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6-secret-volume\") pod \"0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6\" (UID: \"0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6\") " Dec 02 14:45:04 crc kubenswrapper[4814]: I1202 14:45:04.924012 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zsz5\" (UniqueName: \"kubernetes.io/projected/0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6-kube-api-access-9zsz5\") pod \"0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6\" (UID: \"0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6\") " Dec 02 14:45:04 crc kubenswrapper[4814]: I1202 14:45:04.924199 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6-config-volume\") pod \"0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6\" (UID: \"0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6\") " Dec 02 14:45:04 crc kubenswrapper[4814]: I1202 14:45:04.924873 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6-config-volume" (OuterVolumeSpecName: "config-volume") pod "0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6" (UID: "0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:45:04 crc kubenswrapper[4814]: I1202 14:45:04.925253 4814 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:04 crc kubenswrapper[4814]: I1202 14:45:04.933507 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6-kube-api-access-9zsz5" (OuterVolumeSpecName: "kube-api-access-9zsz5") pod "0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6" (UID: "0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6"). InnerVolumeSpecName "kube-api-access-9zsz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:45:04 crc kubenswrapper[4814]: I1202 14:45:04.936315 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6" (UID: "0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:45:05 crc kubenswrapper[4814]: I1202 14:45:05.026904 4814 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:05 crc kubenswrapper[4814]: I1202 14:45:05.026945 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zsz5\" (UniqueName: \"kubernetes.io/projected/0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6-kube-api-access-9zsz5\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:05 crc kubenswrapper[4814]: I1202 14:45:05.582443 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv" event={"ID":"0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6","Type":"ContainerDied","Data":"6f060d2e49fd53b4f8b771940882ba93e9b7af9d3208e6320d6e47b4d7f4eda3"} Dec 02 14:45:05 crc kubenswrapper[4814]: I1202 14:45:05.582482 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv" Dec 02 14:45:05 crc kubenswrapper[4814]: I1202 14:45:05.582488 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f060d2e49fd53b4f8b771940882ba93e9b7af9d3208e6320d6e47b4d7f4eda3" Dec 02 14:45:05 crc kubenswrapper[4814]: I1202 14:45:05.587128 4814 generic.go:334] "Generic (PLEG): container finished" podID="d2364f74-72df-4afc-ab94-c119c821d5c8" containerID="ee2f250aeccc1c1432a52410ca07bda3b6a7cfd0cffe2837882c1070925cb29c" exitCode=0 Dec 02 14:45:05 crc kubenswrapper[4814]: I1202 14:45:05.587270 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgzmw" event={"ID":"d2364f74-72df-4afc-ab94-c119c821d5c8","Type":"ContainerDied","Data":"ee2f250aeccc1c1432a52410ca07bda3b6a7cfd0cffe2837882c1070925cb29c"} Dec 02 14:45:05 crc kubenswrapper[4814]: I1202 14:45:05.589668 4814 generic.go:334] "Generic (PLEG): container finished" podID="19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28" containerID="cc838c571627c1c552509cf019ea81fb4e2f62c6c0a25a7480d739d09d54f054" exitCode=0 Dec 02 14:45:05 crc kubenswrapper[4814]: I1202 14:45:05.589706 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5l8nn" event={"ID":"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28","Type":"ContainerDied","Data":"cc838c571627c1c552509cf019ea81fb4e2f62c6c0a25a7480d739d09d54f054"} Dec 02 14:45:05 crc kubenswrapper[4814]: I1202 14:45:05.643757 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5l8nn" Dec 02 14:45:05 crc kubenswrapper[4814]: I1202 14:45:05.737312 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28-catalog-content\") pod \"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28\" (UID: \"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28\") " Dec 02 14:45:05 crc kubenswrapper[4814]: I1202 14:45:05.737388 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28-utilities\") pod \"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28\" (UID: \"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28\") " Dec 02 14:45:05 crc kubenswrapper[4814]: I1202 14:45:05.737486 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2rjv\" (UniqueName: \"kubernetes.io/projected/19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28-kube-api-access-z2rjv\") pod \"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28\" (UID: \"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28\") " Dec 02 14:45:05 crc kubenswrapper[4814]: I1202 14:45:05.738618 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28-utilities" (OuterVolumeSpecName: "utilities") pod "19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28" (UID: "19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:45:05 crc kubenswrapper[4814]: I1202 14:45:05.749428 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28-kube-api-access-z2rjv" (OuterVolumeSpecName: "kube-api-access-z2rjv") pod "19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28" (UID: "19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28"). InnerVolumeSpecName "kube-api-access-z2rjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:45:05 crc kubenswrapper[4814]: I1202 14:45:05.818380 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28" (UID: "19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:45:05 crc kubenswrapper[4814]: I1202 14:45:05.839908 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:05 crc kubenswrapper[4814]: I1202 14:45:05.839952 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:05 crc kubenswrapper[4814]: I1202 14:45:05.839966 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2rjv\" (UniqueName: \"kubernetes.io/projected/19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28-kube-api-access-z2rjv\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.121615 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgzmw" Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.143547 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8wbm\" (UniqueName: \"kubernetes.io/projected/d2364f74-72df-4afc-ab94-c119c821d5c8-kube-api-access-h8wbm\") pod \"d2364f74-72df-4afc-ab94-c119c821d5c8\" (UID: \"d2364f74-72df-4afc-ab94-c119c821d5c8\") " Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.143653 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2364f74-72df-4afc-ab94-c119c821d5c8-utilities\") pod \"d2364f74-72df-4afc-ab94-c119c821d5c8\" (UID: \"d2364f74-72df-4afc-ab94-c119c821d5c8\") " Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.143735 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2364f74-72df-4afc-ab94-c119c821d5c8-catalog-content\") pod \"d2364f74-72df-4afc-ab94-c119c821d5c8\" (UID: \"d2364f74-72df-4afc-ab94-c119c821d5c8\") " Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.144720 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2364f74-72df-4afc-ab94-c119c821d5c8-utilities" (OuterVolumeSpecName: "utilities") pod "d2364f74-72df-4afc-ab94-c119c821d5c8" (UID: "d2364f74-72df-4afc-ab94-c119c821d5c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.146832 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2364f74-72df-4afc-ab94-c119c821d5c8-kube-api-access-h8wbm" (OuterVolumeSpecName: "kube-api-access-h8wbm") pod "d2364f74-72df-4afc-ab94-c119c821d5c8" (UID: "d2364f74-72df-4afc-ab94-c119c821d5c8"). InnerVolumeSpecName "kube-api-access-h8wbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.187879 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2364f74-72df-4afc-ab94-c119c821d5c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2364f74-72df-4afc-ab94-c119c821d5c8" (UID: "d2364f74-72df-4afc-ab94-c119c821d5c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.244777 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8wbm\" (UniqueName: \"kubernetes.io/projected/d2364f74-72df-4afc-ab94-c119c821d5c8-kube-api-access-h8wbm\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.244811 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2364f74-72df-4afc-ab94-c119c821d5c8-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.244820 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2364f74-72df-4afc-ab94-c119c821d5c8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.600581 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5l8nn" event={"ID":"19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28","Type":"ContainerDied","Data":"742066b3d30e9258ec36b0a6fb1f426fd9a87124d65eff615082752ed52c9d15"} Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.600664 4814 scope.go:117] "RemoveContainer" containerID="cc838c571627c1c552509cf019ea81fb4e2f62c6c0a25a7480d739d09d54f054" Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.600616 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5l8nn" Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.606074 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgzmw" event={"ID":"d2364f74-72df-4afc-ab94-c119c821d5c8","Type":"ContainerDied","Data":"914541b6c80071bc2a9a9f2407d9ef958ce8f28e9357880d2ec0e0b4e485bd37"} Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.606178 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgzmw" Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.627421 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5l8nn"] Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.633291 4814 scope.go:117] "RemoveContainer" containerID="2514f64af66e8ebb342004e053e935e0adea98bc5eb79870f2389fa410bc6bde" Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.639385 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5l8nn"] Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.645727 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgzmw"] Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.651457 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgzmw"] Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.687975 4814 scope.go:117] "RemoveContainer" containerID="24a7c9ae6ef0af07c35f96154cc716afdecc1399bed5b8e666f40c93eeec5de6" Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.711856 4814 scope.go:117] "RemoveContainer" containerID="ee2f250aeccc1c1432a52410ca07bda3b6a7cfd0cffe2837882c1070925cb29c" Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.739319 4814 scope.go:117] "RemoveContainer" 
containerID="4a3893841a9f9e4ca74235f6ece7fcc48f652f63586b35861bf963b7ff0e8d22" Dec 02 14:45:06 crc kubenswrapper[4814]: I1202 14:45:06.759911 4814 scope.go:117] "RemoveContainer" containerID="f14215ef45dc4a04746e44c94252b2e4c2b6d59df07b7730603d3879876d18f7" Dec 02 14:45:08 crc kubenswrapper[4814]: I1202 14:45:08.161711 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28" path="/var/lib/kubelet/pods/19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28/volumes" Dec 02 14:45:08 crc kubenswrapper[4814]: I1202 14:45:08.164148 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2364f74-72df-4afc-ab94-c119c821d5c8" path="/var/lib/kubelet/pods/d2364f74-72df-4afc-ab94-c119c821d5c8/volumes" Dec 02 14:45:08 crc kubenswrapper[4814]: I1202 14:45:08.376033 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-z92q9" Dec 02 14:45:08 crc kubenswrapper[4814]: I1202 14:45:08.424297 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-rgnvj" Dec 02 14:45:08 crc kubenswrapper[4814]: I1202 14:45:08.627126 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-7ttvw" Dec 02 14:45:08 crc kubenswrapper[4814]: I1202 14:45:08.791385 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-l5bdj" Dec 02 14:45:08 crc kubenswrapper[4814]: I1202 14:45:08.834677 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-ngzqv" Dec 02 14:45:08 crc kubenswrapper[4814]: I1202 14:45:08.919658 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zbs77" Dec 02 14:45:08 crc kubenswrapper[4814]: I1202 14:45:08.961263 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-q7sbm" Dec 02 14:45:09 crc kubenswrapper[4814]: I1202 14:45:09.379476 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-kxkhr" Dec 02 14:45:09 crc kubenswrapper[4814]: I1202 14:45:09.486857 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9nxkr" Dec 02 14:45:15 crc kubenswrapper[4814]: I1202 14:45:15.693133 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv"] Dec 02 14:45:15 crc kubenswrapper[4814]: I1202 14:45:15.693988 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv" podUID="5e592dcb-a77d-4bf2-8600-7655ad2b6e81" containerName="operator" containerID="cri-o://a775ab05d3f01cc5c1c6858dd49543c4236bbdf7f4cbd303027aaf70f12bac69" gracePeriod=10 Dec 02 14:45:17 crc kubenswrapper[4814]: I1202 14:45:17.359155 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv" Dec 02 14:45:17 crc kubenswrapper[4814]: I1202 14:45:17.517382 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh9rz\" (UniqueName: \"kubernetes.io/projected/5e592dcb-a77d-4bf2-8600-7655ad2b6e81-kube-api-access-kh9rz\") pod \"5e592dcb-a77d-4bf2-8600-7655ad2b6e81\" (UID: \"5e592dcb-a77d-4bf2-8600-7655ad2b6e81\") " Dec 02 14:45:17 crc kubenswrapper[4814]: I1202 14:45:17.525900 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e592dcb-a77d-4bf2-8600-7655ad2b6e81-kube-api-access-kh9rz" (OuterVolumeSpecName: "kube-api-access-kh9rz") pod "5e592dcb-a77d-4bf2-8600-7655ad2b6e81" (UID: "5e592dcb-a77d-4bf2-8600-7655ad2b6e81"). InnerVolumeSpecName "kube-api-access-kh9rz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:45:17 crc kubenswrapper[4814]: I1202 14:45:17.618841 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kh9rz\" (UniqueName: \"kubernetes.io/projected/5e592dcb-a77d-4bf2-8600-7655ad2b6e81-kube-api-access-kh9rz\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:17 crc kubenswrapper[4814]: I1202 14:45:17.698069 4814 generic.go:334] "Generic (PLEG): container finished" podID="5e592dcb-a77d-4bf2-8600-7655ad2b6e81" containerID="a775ab05d3f01cc5c1c6858dd49543c4236bbdf7f4cbd303027aaf70f12bac69" exitCode=0 Dec 02 14:45:17 crc kubenswrapper[4814]: I1202 14:45:17.698115 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv" event={"ID":"5e592dcb-a77d-4bf2-8600-7655ad2b6e81","Type":"ContainerDied","Data":"a775ab05d3f01cc5c1c6858dd49543c4236bbdf7f4cbd303027aaf70f12bac69"} Dec 02 14:45:17 crc kubenswrapper[4814]: I1202 14:45:17.698146 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv" event={"ID":"5e592dcb-a77d-4bf2-8600-7655ad2b6e81","Type":"ContainerDied","Data":"a580be4bd65312bea494248e73abd7590c4d3b2f46f0441bb2f56dbec48d7d56"} Dec 02 14:45:17 crc kubenswrapper[4814]: I1202 14:45:17.698159 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv" Dec 02 14:45:17 crc kubenswrapper[4814]: I1202 14:45:17.698167 4814 scope.go:117] "RemoveContainer" containerID="a775ab05d3f01cc5c1c6858dd49543c4236bbdf7f4cbd303027aaf70f12bac69" Dec 02 14:45:17 crc kubenswrapper[4814]: I1202 14:45:17.719463 4814 scope.go:117] "RemoveContainer" containerID="a775ab05d3f01cc5c1c6858dd49543c4236bbdf7f4cbd303027aaf70f12bac69" Dec 02 14:45:17 crc kubenswrapper[4814]: E1202 14:45:17.719855 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a775ab05d3f01cc5c1c6858dd49543c4236bbdf7f4cbd303027aaf70f12bac69\": container with ID starting with a775ab05d3f01cc5c1c6858dd49543c4236bbdf7f4cbd303027aaf70f12bac69 not found: ID does not exist" containerID="a775ab05d3f01cc5c1c6858dd49543c4236bbdf7f4cbd303027aaf70f12bac69" Dec 02 14:45:17 crc kubenswrapper[4814]: I1202 14:45:17.719891 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a775ab05d3f01cc5c1c6858dd49543c4236bbdf7f4cbd303027aaf70f12bac69"} err="failed to get container status \"a775ab05d3f01cc5c1c6858dd49543c4236bbdf7f4cbd303027aaf70f12bac69\": rpc error: code = NotFound desc = could not find container \"a775ab05d3f01cc5c1c6858dd49543c4236bbdf7f4cbd303027aaf70f12bac69\": container with ID starting with a775ab05d3f01cc5c1c6858dd49543c4236bbdf7f4cbd303027aaf70f12bac69 not found: ID does not exist" Dec 02 14:45:17 crc kubenswrapper[4814]: I1202 14:45:17.743179 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv"] Dec 02 14:45:17 crc kubenswrapper[4814]: I1202 14:45:17.751388 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-748bc85fdb-87vnv"] Dec 02 14:45:17 crc kubenswrapper[4814]: I1202 14:45:17.931072 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6"] Dec 02 14:45:17 crc kubenswrapper[4814]: I1202 14:45:17.931318 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" podUID="4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad" containerName="manager" containerID="cri-o://5c174e25c531400ac525b44965ac32299aa8e67048e75847896817b7e0b278cc" gracePeriod=10 Dec 02 14:45:17 crc kubenswrapper[4814]: I1202 14:45:17.931447 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" podUID="4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad" containerName="kube-rbac-proxy" containerID="cri-o://9523611ab740011adfde0b2407da94ed6a0dcc7f2c2d09544bec3037c13a7b66" gracePeriod=10 Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.165920 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e592dcb-a77d-4bf2-8600-7655ad2b6e81" path="/var/lib/kubelet/pods/5e592dcb-a77d-4bf2-8600-7655ad2b6e81/volumes" Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.384452 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.532563 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8t88\" (UniqueName: \"kubernetes.io/projected/4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad-kube-api-access-h8t88\") pod \"4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad\" (UID: \"4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad\") " Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.539052 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad-kube-api-access-h8t88" (OuterVolumeSpecName: "kube-api-access-h8t88") pod "4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad" (UID: "4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad"). InnerVolumeSpecName "kube-api-access-h8t88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.634697 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8t88\" (UniqueName: \"kubernetes.io/projected/4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad-kube-api-access-h8t88\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.709813 4814 generic.go:334] "Generic (PLEG): container finished" podID="4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad" containerID="9523611ab740011adfde0b2407da94ed6a0dcc7f2c2d09544bec3037c13a7b66" exitCode=0 Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.709854 4814 generic.go:334] "Generic (PLEG): container finished" podID="4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad" containerID="5c174e25c531400ac525b44965ac32299aa8e67048e75847896817b7e0b278cc" exitCode=0 Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.709890 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.709888 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" event={"ID":"4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad","Type":"ContainerDied","Data":"9523611ab740011adfde0b2407da94ed6a0dcc7f2c2d09544bec3037c13a7b66"} Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.710095 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" event={"ID":"4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad","Type":"ContainerDied","Data":"5c174e25c531400ac525b44965ac32299aa8e67048e75847896817b7e0b278cc"} Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.710125 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6" event={"ID":"4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad","Type":"ContainerDied","Data":"df8bec736aa6848cba96695b2956fd3c344666166df3bec3ca247b699d9a5f59"} Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.710153 4814 scope.go:117] "RemoveContainer" containerID="9523611ab740011adfde0b2407da94ed6a0dcc7f2c2d09544bec3037c13a7b66" Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.730765 4814 scope.go:117] "RemoveContainer" containerID="5c174e25c531400ac525b44965ac32299aa8e67048e75847896817b7e0b278cc" Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.756901 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6"] Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.758811 4814 scope.go:117] "RemoveContainer" containerID="9523611ab740011adfde0b2407da94ed6a0dcc7f2c2d09544bec3037c13a7b66" Dec 02 14:45:18 crc kubenswrapper[4814]: E1202 14:45:18.759046 4814 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"9523611ab740011adfde0b2407da94ed6a0dcc7f2c2d09544bec3037c13a7b66\": container with ID starting with 9523611ab740011adfde0b2407da94ed6a0dcc7f2c2d09544bec3037c13a7b66 not found: ID does not exist" containerID="9523611ab740011adfde0b2407da94ed6a0dcc7f2c2d09544bec3037c13a7b66" Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.759081 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9523611ab740011adfde0b2407da94ed6a0dcc7f2c2d09544bec3037c13a7b66"} err="failed to get container status \"9523611ab740011adfde0b2407da94ed6a0dcc7f2c2d09544bec3037c13a7b66\": rpc error: code = NotFound desc = could not find container \"9523611ab740011adfde0b2407da94ed6a0dcc7f2c2d09544bec3037c13a7b66\": container with ID starting with 9523611ab740011adfde0b2407da94ed6a0dcc7f2c2d09544bec3037c13a7b66 not found: ID does not exist" Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.759108 4814 scope.go:117] "RemoveContainer" containerID="5c174e25c531400ac525b44965ac32299aa8e67048e75847896817b7e0b278cc" Dec 02 14:45:18 crc kubenswrapper[4814]: E1202 14:45:18.759342 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c174e25c531400ac525b44965ac32299aa8e67048e75847896817b7e0b278cc\": container with ID starting with 5c174e25c531400ac525b44965ac32299aa8e67048e75847896817b7e0b278cc not found: ID does not exist" containerID="5c174e25c531400ac525b44965ac32299aa8e67048e75847896817b7e0b278cc" Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.759371 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c174e25c531400ac525b44965ac32299aa8e67048e75847896817b7e0b278cc"} err="failed to get container status \"5c174e25c531400ac525b44965ac32299aa8e67048e75847896817b7e0b278cc\": rpc error: code = NotFound desc = could not find container 
\"5c174e25c531400ac525b44965ac32299aa8e67048e75847896817b7e0b278cc\": container with ID starting with 5c174e25c531400ac525b44965ac32299aa8e67048e75847896817b7e0b278cc not found: ID does not exist" Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.759390 4814 scope.go:117] "RemoveContainer" containerID="9523611ab740011adfde0b2407da94ed6a0dcc7f2c2d09544bec3037c13a7b66" Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.759546 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9523611ab740011adfde0b2407da94ed6a0dcc7f2c2d09544bec3037c13a7b66"} err="failed to get container status \"9523611ab740011adfde0b2407da94ed6a0dcc7f2c2d09544bec3037c13a7b66\": rpc error: code = NotFound desc = could not find container \"9523611ab740011adfde0b2407da94ed6a0dcc7f2c2d09544bec3037c13a7b66\": container with ID starting with 9523611ab740011adfde0b2407da94ed6a0dcc7f2c2d09544bec3037c13a7b66 not found: ID does not exist" Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.759572 4814 scope.go:117] "RemoveContainer" containerID="5c174e25c531400ac525b44965ac32299aa8e67048e75847896817b7e0b278cc" Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.760588 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c174e25c531400ac525b44965ac32299aa8e67048e75847896817b7e0b278cc"} err="failed to get container status \"5c174e25c531400ac525b44965ac32299aa8e67048e75847896817b7e0b278cc\": rpc error: code = NotFound desc = could not find container \"5c174e25c531400ac525b44965ac32299aa8e67048e75847896817b7e0b278cc\": container with ID starting with 5c174e25c531400ac525b44965ac32299aa8e67048e75847896817b7e0b278cc not found: ID does not exist" Dec 02 14:45:18 crc kubenswrapper[4814]: I1202 14:45:18.764115 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-b76b9c9d7-l6nn6"] Dec 02 14:45:20 crc kubenswrapper[4814]: I1202 
14:45:20.153185 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad" path="/var/lib/kubelet/pods/4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad/volumes" Dec 02 14:45:20 crc kubenswrapper[4814]: I1202 14:45:20.651265 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:45:20 crc kubenswrapper[4814]: I1202 14:45:20.651316 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.226691 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-46tvt"] Dec 02 14:45:22 crc kubenswrapper[4814]: E1202 14:45:22.227403 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28" containerName="extract-utilities" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.227419 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28" containerName="extract-utilities" Dec 02 14:45:22 crc kubenswrapper[4814]: E1202 14:45:22.227437 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6" containerName="collect-profiles" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.227445 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6" containerName="collect-profiles" Dec 02 14:45:22 crc kubenswrapper[4814]: E1202 14:45:22.227465 4814 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28" containerName="registry-server" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.227473 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28" containerName="registry-server" Dec 02 14:45:22 crc kubenswrapper[4814]: E1202 14:45:22.227494 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad" containerName="kube-rbac-proxy" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.227501 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad" containerName="kube-rbac-proxy" Dec 02 14:45:22 crc kubenswrapper[4814]: E1202 14:45:22.227528 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e592dcb-a77d-4bf2-8600-7655ad2b6e81" containerName="operator" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.227537 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e592dcb-a77d-4bf2-8600-7655ad2b6e81" containerName="operator" Dec 02 14:45:22 crc kubenswrapper[4814]: E1202 14:45:22.227549 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2364f74-72df-4afc-ab94-c119c821d5c8" containerName="extract-utilities" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.227557 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2364f74-72df-4afc-ab94-c119c821d5c8" containerName="extract-utilities" Dec 02 14:45:22 crc kubenswrapper[4814]: E1202 14:45:22.227567 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad" containerName="manager" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.227574 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad" containerName="manager" Dec 02 14:45:22 crc kubenswrapper[4814]: E1202 14:45:22.227586 4814 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d2364f74-72df-4afc-ab94-c119c821d5c8" containerName="extract-content" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.227593 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2364f74-72df-4afc-ab94-c119c821d5c8" containerName="extract-content" Dec 02 14:45:22 crc kubenswrapper[4814]: E1202 14:45:22.227603 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28" containerName="extract-content" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.227610 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28" containerName="extract-content" Dec 02 14:45:22 crc kubenswrapper[4814]: E1202 14:45:22.227624 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2364f74-72df-4afc-ab94-c119c821d5c8" containerName="registry-server" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.227631 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2364f74-72df-4afc-ab94-c119c821d5c8" containerName="registry-server" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.227805 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="19aaaf80-2786-4bf7-97d9-3a8e2d2e1e28" containerName="registry-server" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.227817 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad" containerName="manager" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.227835 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e592dcb-a77d-4bf2-8600-7655ad2b6e81" containerName="operator" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.227846 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f85bd48-27d1-4e3a-8fb8-ee34af71a2ad" containerName="kube-rbac-proxy" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.227865 4814 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="d2364f74-72df-4afc-ab94-c119c821d5c8" containerName="registry-server" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.227876 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6" containerName="collect-profiles" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.228687 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-46tvt" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.242134 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-index-dockercfg-l8qqx" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.248128 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-46tvt"] Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.386524 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcv74\" (UniqueName: \"kubernetes.io/projected/27e2bf59-bb61-4f14-88af-07cf8e2bdbdb-kube-api-access-fcv74\") pod \"watcher-operator-index-46tvt\" (UID: \"27e2bf59-bb61-4f14-88af-07cf8e2bdbdb\") " pod="openstack-operators/watcher-operator-index-46tvt" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.487540 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcv74\" (UniqueName: \"kubernetes.io/projected/27e2bf59-bb61-4f14-88af-07cf8e2bdbdb-kube-api-access-fcv74\") pod \"watcher-operator-index-46tvt\" (UID: \"27e2bf59-bb61-4f14-88af-07cf8e2bdbdb\") " pod="openstack-operators/watcher-operator-index-46tvt" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.505131 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcv74\" (UniqueName: \"kubernetes.io/projected/27e2bf59-bb61-4f14-88af-07cf8e2bdbdb-kube-api-access-fcv74\") pod \"watcher-operator-index-46tvt\" 
(UID: \"27e2bf59-bb61-4f14-88af-07cf8e2bdbdb\") " pod="openstack-operators/watcher-operator-index-46tvt" Dec 02 14:45:22 crc kubenswrapper[4814]: I1202 14:45:22.562262 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-46tvt" Dec 02 14:45:23 crc kubenswrapper[4814]: I1202 14:45:23.022148 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-46tvt"] Dec 02 14:45:23 crc kubenswrapper[4814]: I1202 14:45:23.757924 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-46tvt" event={"ID":"27e2bf59-bb61-4f14-88af-07cf8e2bdbdb","Type":"ContainerStarted","Data":"8246572ebab04f58555cf55adcced0784a792b7cf44f49942710ae6fb80a0528"} Dec 02 14:45:23 crc kubenswrapper[4814]: I1202 14:45:23.758238 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-46tvt" event={"ID":"27e2bf59-bb61-4f14-88af-07cf8e2bdbdb","Type":"ContainerStarted","Data":"32f6e6580564ae3c0af9db30d747114aa962ff35a153c37029595c312b647099"} Dec 02 14:45:23 crc kubenswrapper[4814]: I1202 14:45:23.775857 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-46tvt" podStartSLOduration=1.5831132650000002 podStartE2EDuration="1.775835424s" podCreationTimestamp="2025-12-02 14:45:22 +0000 UTC" firstStartedPulling="2025-12-02 14:45:23.051881075 +0000 UTC m=+985.176243279" lastFinishedPulling="2025-12-02 14:45:23.244603214 +0000 UTC m=+985.368965438" observedRunningTime="2025-12-02 14:45:23.773984108 +0000 UTC m=+985.898346312" watchObservedRunningTime="2025-12-02 14:45:23.775835424 +0000 UTC m=+985.900197628" Dec 02 14:45:27 crc kubenswrapper[4814]: I1202 14:45:27.824933 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-46tvt"] Dec 02 14:45:27 crc kubenswrapper[4814]: I1202 14:45:27.825717 
4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-index-46tvt" podUID="27e2bf59-bb61-4f14-88af-07cf8e2bdbdb" containerName="registry-server" containerID="cri-o://8246572ebab04f58555cf55adcced0784a792b7cf44f49942710ae6fb80a0528" gracePeriod=2 Dec 02 14:45:28 crc kubenswrapper[4814]: I1202 14:45:28.629054 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-zst84"] Dec 02 14:45:28 crc kubenswrapper[4814]: I1202 14:45:28.631051 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-zst84" Dec 02 14:45:28 crc kubenswrapper[4814]: I1202 14:45:28.636358 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-zst84"] Dec 02 14:45:28 crc kubenswrapper[4814]: I1202 14:45:28.784866 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv2q8\" (UniqueName: \"kubernetes.io/projected/00b8428a-ae08-4c70-8eb7-566090aa6716-kube-api-access-hv2q8\") pod \"watcher-operator-index-zst84\" (UID: \"00b8428a-ae08-4c70-8eb7-566090aa6716\") " pod="openstack-operators/watcher-operator-index-zst84" Dec 02 14:45:28 crc kubenswrapper[4814]: I1202 14:45:28.806858 4814 generic.go:334] "Generic (PLEG): container finished" podID="27e2bf59-bb61-4f14-88af-07cf8e2bdbdb" containerID="8246572ebab04f58555cf55adcced0784a792b7cf44f49942710ae6fb80a0528" exitCode=0 Dec 02 14:45:28 crc kubenswrapper[4814]: I1202 14:45:28.806912 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-46tvt" event={"ID":"27e2bf59-bb61-4f14-88af-07cf8e2bdbdb","Type":"ContainerDied","Data":"8246572ebab04f58555cf55adcced0784a792b7cf44f49942710ae6fb80a0528"} Dec 02 14:45:28 crc kubenswrapper[4814]: I1202 14:45:28.806942 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/watcher-operator-index-46tvt" event={"ID":"27e2bf59-bb61-4f14-88af-07cf8e2bdbdb","Type":"ContainerDied","Data":"32f6e6580564ae3c0af9db30d747114aa962ff35a153c37029595c312b647099"} Dec 02 14:45:28 crc kubenswrapper[4814]: I1202 14:45:28.806955 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32f6e6580564ae3c0af9db30d747114aa962ff35a153c37029595c312b647099" Dec 02 14:45:28 crc kubenswrapper[4814]: I1202 14:45:28.828018 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-46tvt" Dec 02 14:45:28 crc kubenswrapper[4814]: I1202 14:45:28.886275 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv2q8\" (UniqueName: \"kubernetes.io/projected/00b8428a-ae08-4c70-8eb7-566090aa6716-kube-api-access-hv2q8\") pod \"watcher-operator-index-zst84\" (UID: \"00b8428a-ae08-4c70-8eb7-566090aa6716\") " pod="openstack-operators/watcher-operator-index-zst84" Dec 02 14:45:28 crc kubenswrapper[4814]: I1202 14:45:28.903340 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv2q8\" (UniqueName: \"kubernetes.io/projected/00b8428a-ae08-4c70-8eb7-566090aa6716-kube-api-access-hv2q8\") pod \"watcher-operator-index-zst84\" (UID: \"00b8428a-ae08-4c70-8eb7-566090aa6716\") " pod="openstack-operators/watcher-operator-index-zst84" Dec 02 14:45:28 crc kubenswrapper[4814]: I1202 14:45:28.949552 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-zst84" Dec 02 14:45:28 crc kubenswrapper[4814]: I1202 14:45:28.987298 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcv74\" (UniqueName: \"kubernetes.io/projected/27e2bf59-bb61-4f14-88af-07cf8e2bdbdb-kube-api-access-fcv74\") pod \"27e2bf59-bb61-4f14-88af-07cf8e2bdbdb\" (UID: \"27e2bf59-bb61-4f14-88af-07cf8e2bdbdb\") " Dec 02 14:45:28 crc kubenswrapper[4814]: I1202 14:45:28.991403 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27e2bf59-bb61-4f14-88af-07cf8e2bdbdb-kube-api-access-fcv74" (OuterVolumeSpecName: "kube-api-access-fcv74") pod "27e2bf59-bb61-4f14-88af-07cf8e2bdbdb" (UID: "27e2bf59-bb61-4f14-88af-07cf8e2bdbdb"). InnerVolumeSpecName "kube-api-access-fcv74". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:45:29 crc kubenswrapper[4814]: I1202 14:45:29.088786 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcv74\" (UniqueName: \"kubernetes.io/projected/27e2bf59-bb61-4f14-88af-07cf8e2bdbdb-kube-api-access-fcv74\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:29 crc kubenswrapper[4814]: I1202 14:45:29.393056 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-zst84"] Dec 02 14:45:29 crc kubenswrapper[4814]: I1202 14:45:29.817146 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-46tvt" Dec 02 14:45:29 crc kubenswrapper[4814]: I1202 14:45:29.817206 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-zst84" event={"ID":"00b8428a-ae08-4c70-8eb7-566090aa6716","Type":"ContainerStarted","Data":"660c9106c3be5de39137012ad85c9188b9fce996c2a29a6f342fbac7ae1495c1"} Dec 02 14:45:29 crc kubenswrapper[4814]: I1202 14:45:29.817323 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-zst84" event={"ID":"00b8428a-ae08-4c70-8eb7-566090aa6716","Type":"ContainerStarted","Data":"3112f096c188534a2cc76b49216d80069621dd6cc9b187e252ed6b097416c600"} Dec 02 14:45:29 crc kubenswrapper[4814]: I1202 14:45:29.837473 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-zst84" podStartSLOduration=1.778467784 podStartE2EDuration="1.837453604s" podCreationTimestamp="2025-12-02 14:45:28 +0000 UTC" firstStartedPulling="2025-12-02 14:45:29.3996586 +0000 UTC m=+991.524020804" lastFinishedPulling="2025-12-02 14:45:29.45864442 +0000 UTC m=+991.583006624" observedRunningTime="2025-12-02 14:45:29.831994588 +0000 UTC m=+991.956356792" watchObservedRunningTime="2025-12-02 14:45:29.837453604 +0000 UTC m=+991.961815828" Dec 02 14:45:29 crc kubenswrapper[4814]: I1202 14:45:29.862728 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-46tvt"] Dec 02 14:45:29 crc kubenswrapper[4814]: I1202 14:45:29.868072 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-index-46tvt"] Dec 02 14:45:30 crc kubenswrapper[4814]: I1202 14:45:30.154413 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27e2bf59-bb61-4f14-88af-07cf8e2bdbdb" path="/var/lib/kubelet/pods/27e2bf59-bb61-4f14-88af-07cf8e2bdbdb/volumes" Dec 02 14:45:31 crc kubenswrapper[4814]: 
I1202 14:45:31.443047 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jq7nw"] Dec 02 14:45:31 crc kubenswrapper[4814]: E1202 14:45:31.443703 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e2bf59-bb61-4f14-88af-07cf8e2bdbdb" containerName="registry-server" Dec 02 14:45:31 crc kubenswrapper[4814]: I1202 14:45:31.443720 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e2bf59-bb61-4f14-88af-07cf8e2bdbdb" containerName="registry-server" Dec 02 14:45:31 crc kubenswrapper[4814]: I1202 14:45:31.443907 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e2bf59-bb61-4f14-88af-07cf8e2bdbdb" containerName="registry-server" Dec 02 14:45:31 crc kubenswrapper[4814]: I1202 14:45:31.445201 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jq7nw" Dec 02 14:45:31 crc kubenswrapper[4814]: I1202 14:45:31.475212 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jq7nw"] Dec 02 14:45:31 crc kubenswrapper[4814]: I1202 14:45:31.521958 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8d4179d-08e4-4c4b-b009-5d0bdb4351a6-utilities\") pod \"certified-operators-jq7nw\" (UID: \"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6\") " pod="openshift-marketplace/certified-operators-jq7nw" Dec 02 14:45:31 crc kubenswrapper[4814]: I1202 14:45:31.522026 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqtjr\" (UniqueName: \"kubernetes.io/projected/b8d4179d-08e4-4c4b-b009-5d0bdb4351a6-kube-api-access-pqtjr\") pod \"certified-operators-jq7nw\" (UID: \"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6\") " pod="openshift-marketplace/certified-operators-jq7nw" Dec 02 14:45:31 crc kubenswrapper[4814]: I1202 14:45:31.522074 4814 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8d4179d-08e4-4c4b-b009-5d0bdb4351a6-catalog-content\") pod \"certified-operators-jq7nw\" (UID: \"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6\") " pod="openshift-marketplace/certified-operators-jq7nw" Dec 02 14:45:31 crc kubenswrapper[4814]: I1202 14:45:31.623061 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8d4179d-08e4-4c4b-b009-5d0bdb4351a6-catalog-content\") pod \"certified-operators-jq7nw\" (UID: \"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6\") " pod="openshift-marketplace/certified-operators-jq7nw" Dec 02 14:45:31 crc kubenswrapper[4814]: I1202 14:45:31.623159 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8d4179d-08e4-4c4b-b009-5d0bdb4351a6-utilities\") pod \"certified-operators-jq7nw\" (UID: \"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6\") " pod="openshift-marketplace/certified-operators-jq7nw" Dec 02 14:45:31 crc kubenswrapper[4814]: I1202 14:45:31.623197 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqtjr\" (UniqueName: \"kubernetes.io/projected/b8d4179d-08e4-4c4b-b009-5d0bdb4351a6-kube-api-access-pqtjr\") pod \"certified-operators-jq7nw\" (UID: \"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6\") " pod="openshift-marketplace/certified-operators-jq7nw" Dec 02 14:45:31 crc kubenswrapper[4814]: I1202 14:45:31.623615 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8d4179d-08e4-4c4b-b009-5d0bdb4351a6-catalog-content\") pod \"certified-operators-jq7nw\" (UID: \"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6\") " pod="openshift-marketplace/certified-operators-jq7nw" Dec 02 14:45:31 crc kubenswrapper[4814]: I1202 14:45:31.623728 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8d4179d-08e4-4c4b-b009-5d0bdb4351a6-utilities\") pod \"certified-operators-jq7nw\" (UID: \"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6\") " pod="openshift-marketplace/certified-operators-jq7nw" Dec 02 14:45:31 crc kubenswrapper[4814]: I1202 14:45:31.647167 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqtjr\" (UniqueName: \"kubernetes.io/projected/b8d4179d-08e4-4c4b-b009-5d0bdb4351a6-kube-api-access-pqtjr\") pod \"certified-operators-jq7nw\" (UID: \"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6\") " pod="openshift-marketplace/certified-operators-jq7nw" Dec 02 14:45:31 crc kubenswrapper[4814]: I1202 14:45:31.760077 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jq7nw" Dec 02 14:45:32 crc kubenswrapper[4814]: I1202 14:45:32.227486 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jq7nw"] Dec 02 14:45:32 crc kubenswrapper[4814]: W1202 14:45:32.241939 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8d4179d_08e4_4c4b_b009_5d0bdb4351a6.slice/crio-84dffacb48c82bf4291babc4b1abfef1e3c943c03d21fa91525253f63bc4121d WatchSource:0}: Error finding container 84dffacb48c82bf4291babc4b1abfef1e3c943c03d21fa91525253f63bc4121d: Status 404 returned error can't find the container with id 84dffacb48c82bf4291babc4b1abfef1e3c943c03d21fa91525253f63bc4121d Dec 02 14:45:32 crc kubenswrapper[4814]: I1202 14:45:32.842598 4814 generic.go:334] "Generic (PLEG): container finished" podID="b8d4179d-08e4-4c4b-b009-5d0bdb4351a6" containerID="afa47dce6685b6000fd206708d15e3ba05e0b834e8d00197ccb43a38ac479fa4" exitCode=0 Dec 02 14:45:32 crc kubenswrapper[4814]: I1202 14:45:32.842645 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-jq7nw" event={"ID":"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6","Type":"ContainerDied","Data":"afa47dce6685b6000fd206708d15e3ba05e0b834e8d00197ccb43a38ac479fa4"} Dec 02 14:45:32 crc kubenswrapper[4814]: I1202 14:45:32.842873 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jq7nw" event={"ID":"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6","Type":"ContainerStarted","Data":"84dffacb48c82bf4291babc4b1abfef1e3c943c03d21fa91525253f63bc4121d"} Dec 02 14:45:34 crc kubenswrapper[4814]: I1202 14:45:34.864311 4814 generic.go:334] "Generic (PLEG): container finished" podID="b8d4179d-08e4-4c4b-b009-5d0bdb4351a6" containerID="790029197e89457a615296dd353e7c7459f090fb29dc2ff47853389465badc85" exitCode=0 Dec 02 14:45:34 crc kubenswrapper[4814]: I1202 14:45:34.864534 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jq7nw" event={"ID":"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6","Type":"ContainerDied","Data":"790029197e89457a615296dd353e7c7459f090fb29dc2ff47853389465badc85"} Dec 02 14:45:35 crc kubenswrapper[4814]: I1202 14:45:35.876942 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jq7nw" event={"ID":"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6","Type":"ContainerStarted","Data":"96daf68ee56aec7330453649cf682ac91e37f5207a56cb19c4d6125c48c00795"} Dec 02 14:45:35 crc kubenswrapper[4814]: I1202 14:45:35.915402 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jq7nw" podStartSLOduration=2.507802021 podStartE2EDuration="4.915376449s" podCreationTimestamp="2025-12-02 14:45:31 +0000 UTC" firstStartedPulling="2025-12-02 14:45:32.843994559 +0000 UTC m=+994.968356773" lastFinishedPulling="2025-12-02 14:45:35.251568957 +0000 UTC m=+997.375931201" observedRunningTime="2025-12-02 14:45:35.908265812 +0000 UTC m=+998.032628026" 
watchObservedRunningTime="2025-12-02 14:45:35.915376449 +0000 UTC m=+998.039738663" Dec 02 14:45:38 crc kubenswrapper[4814]: I1202 14:45:38.950769 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/watcher-operator-index-zst84" Dec 02 14:45:38 crc kubenswrapper[4814]: I1202 14:45:38.951006 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-index-zst84" Dec 02 14:45:38 crc kubenswrapper[4814]: I1202 14:45:38.980303 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/watcher-operator-index-zst84" Dec 02 14:45:39 crc kubenswrapper[4814]: I1202 14:45:39.937892 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-index-zst84" Dec 02 14:45:41 crc kubenswrapper[4814]: I1202 14:45:41.760413 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jq7nw" Dec 02 14:45:41 crc kubenswrapper[4814]: I1202 14:45:41.761448 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jq7nw" Dec 02 14:45:41 crc kubenswrapper[4814]: I1202 14:45:41.813335 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jq7nw" Dec 02 14:45:41 crc kubenswrapper[4814]: I1202 14:45:41.983000 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jq7nw" Dec 02 14:45:42 crc kubenswrapper[4814]: I1202 14:45:42.684982 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl"] Dec 02 14:45:42 crc kubenswrapper[4814]: I1202 14:45:42.687648 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl" Dec 02 14:45:42 crc kubenswrapper[4814]: I1202 14:45:42.689708 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-8zw7q" Dec 02 14:45:42 crc kubenswrapper[4814]: I1202 14:45:42.693165 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl"] Dec 02 14:45:42 crc kubenswrapper[4814]: I1202 14:45:42.792273 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ce42437-6d96-4856-ac49-711a0959ba93-bundle\") pod \"f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl\" (UID: \"9ce42437-6d96-4856-ac49-711a0959ba93\") " pod="openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl" Dec 02 14:45:42 crc kubenswrapper[4814]: I1202 14:45:42.792443 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlkk4\" (UniqueName: \"kubernetes.io/projected/9ce42437-6d96-4856-ac49-711a0959ba93-kube-api-access-tlkk4\") pod \"f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl\" (UID: \"9ce42437-6d96-4856-ac49-711a0959ba93\") " pod="openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl" Dec 02 14:45:42 crc kubenswrapper[4814]: I1202 14:45:42.792513 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ce42437-6d96-4856-ac49-711a0959ba93-util\") pod \"f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl\" (UID: \"9ce42437-6d96-4856-ac49-711a0959ba93\") " pod="openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl" Dec 02 14:45:42 crc kubenswrapper[4814]: I1202 
14:45:42.894430 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ce42437-6d96-4856-ac49-711a0959ba93-bundle\") pod \"f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl\" (UID: \"9ce42437-6d96-4856-ac49-711a0959ba93\") " pod="openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl" Dec 02 14:45:42 crc kubenswrapper[4814]: I1202 14:45:42.894521 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlkk4\" (UniqueName: \"kubernetes.io/projected/9ce42437-6d96-4856-ac49-711a0959ba93-kube-api-access-tlkk4\") pod \"f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl\" (UID: \"9ce42437-6d96-4856-ac49-711a0959ba93\") " pod="openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl" Dec 02 14:45:42 crc kubenswrapper[4814]: I1202 14:45:42.894546 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ce42437-6d96-4856-ac49-711a0959ba93-util\") pod \"f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl\" (UID: \"9ce42437-6d96-4856-ac49-711a0959ba93\") " pod="openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl" Dec 02 14:45:42 crc kubenswrapper[4814]: I1202 14:45:42.895023 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ce42437-6d96-4856-ac49-711a0959ba93-bundle\") pod \"f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl\" (UID: \"9ce42437-6d96-4856-ac49-711a0959ba93\") " pod="openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl" Dec 02 14:45:42 crc kubenswrapper[4814]: I1202 14:45:42.895027 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/9ce42437-6d96-4856-ac49-711a0959ba93-util\") pod \"f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl\" (UID: \"9ce42437-6d96-4856-ac49-711a0959ba93\") " pod="openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl" Dec 02 14:45:42 crc kubenswrapper[4814]: I1202 14:45:42.931035 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlkk4\" (UniqueName: \"kubernetes.io/projected/9ce42437-6d96-4856-ac49-711a0959ba93-kube-api-access-tlkk4\") pod \"f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl\" (UID: \"9ce42437-6d96-4856-ac49-711a0959ba93\") " pod="openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl" Dec 02 14:45:43 crc kubenswrapper[4814]: I1202 14:45:43.016675 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl" Dec 02 14:45:43 crc kubenswrapper[4814]: I1202 14:45:43.498578 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl"] Dec 02 14:45:43 crc kubenswrapper[4814]: I1202 14:45:43.945184 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl" event={"ID":"9ce42437-6d96-4856-ac49-711a0959ba93","Type":"ContainerStarted","Data":"f1f9d5fbec49104f30f2fdb7c8a205f1c824afb33b3f1cf9ee961f356aaf3f5b"} Dec 02 14:45:44 crc kubenswrapper[4814]: I1202 14:45:44.221736 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jq7nw"] Dec 02 14:45:44 crc kubenswrapper[4814]: I1202 14:45:44.953702 4814 generic.go:334] "Generic (PLEG): container finished" podID="9ce42437-6d96-4856-ac49-711a0959ba93" containerID="8fc611c6e5a68979650a1594d8f88ec7dc9457db7c46b363c26517bcdb2d0524" exitCode=0 Dec 02 
14:45:44 crc kubenswrapper[4814]: I1202 14:45:44.953797 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl" event={"ID":"9ce42437-6d96-4856-ac49-711a0959ba93","Type":"ContainerDied","Data":"8fc611c6e5a68979650a1594d8f88ec7dc9457db7c46b363c26517bcdb2d0524"} Dec 02 14:45:44 crc kubenswrapper[4814]: I1202 14:45:44.954004 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jq7nw" podUID="b8d4179d-08e4-4c4b-b009-5d0bdb4351a6" containerName="registry-server" containerID="cri-o://96daf68ee56aec7330453649cf682ac91e37f5207a56cb19c4d6125c48c00795" gracePeriod=2 Dec 02 14:45:45 crc kubenswrapper[4814]: I1202 14:45:45.970994 4814 generic.go:334] "Generic (PLEG): container finished" podID="b8d4179d-08e4-4c4b-b009-5d0bdb4351a6" containerID="96daf68ee56aec7330453649cf682ac91e37f5207a56cb19c4d6125c48c00795" exitCode=0 Dec 02 14:45:45 crc kubenswrapper[4814]: I1202 14:45:45.971064 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jq7nw" event={"ID":"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6","Type":"ContainerDied","Data":"96daf68ee56aec7330453649cf682ac91e37f5207a56cb19c4d6125c48c00795"} Dec 02 14:45:46 crc kubenswrapper[4814]: I1202 14:45:46.080173 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jq7nw" Dec 02 14:45:46 crc kubenswrapper[4814]: I1202 14:45:46.143408 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqtjr\" (UniqueName: \"kubernetes.io/projected/b8d4179d-08e4-4c4b-b009-5d0bdb4351a6-kube-api-access-pqtjr\") pod \"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6\" (UID: \"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6\") " Dec 02 14:45:46 crc kubenswrapper[4814]: I1202 14:45:46.143527 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8d4179d-08e4-4c4b-b009-5d0bdb4351a6-utilities\") pod \"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6\" (UID: \"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6\") " Dec 02 14:45:46 crc kubenswrapper[4814]: I1202 14:45:46.143629 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8d4179d-08e4-4c4b-b009-5d0bdb4351a6-catalog-content\") pod \"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6\" (UID: \"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6\") " Dec 02 14:45:46 crc kubenswrapper[4814]: I1202 14:45:46.144536 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8d4179d-08e4-4c4b-b009-5d0bdb4351a6-utilities" (OuterVolumeSpecName: "utilities") pod "b8d4179d-08e4-4c4b-b009-5d0bdb4351a6" (UID: "b8d4179d-08e4-4c4b-b009-5d0bdb4351a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:45:46 crc kubenswrapper[4814]: I1202 14:45:46.149379 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8d4179d-08e4-4c4b-b009-5d0bdb4351a6-kube-api-access-pqtjr" (OuterVolumeSpecName: "kube-api-access-pqtjr") pod "b8d4179d-08e4-4c4b-b009-5d0bdb4351a6" (UID: "b8d4179d-08e4-4c4b-b009-5d0bdb4351a6"). InnerVolumeSpecName "kube-api-access-pqtjr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:45:46 crc kubenswrapper[4814]: I1202 14:45:46.193668 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8d4179d-08e4-4c4b-b009-5d0bdb4351a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8d4179d-08e4-4c4b-b009-5d0bdb4351a6" (UID: "b8d4179d-08e4-4c4b-b009-5d0bdb4351a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:45:46 crc kubenswrapper[4814]: I1202 14:45:46.244875 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqtjr\" (UniqueName: \"kubernetes.io/projected/b8d4179d-08e4-4c4b-b009-5d0bdb4351a6-kube-api-access-pqtjr\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:46 crc kubenswrapper[4814]: I1202 14:45:46.244903 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8d4179d-08e4-4c4b-b009-5d0bdb4351a6-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:46 crc kubenswrapper[4814]: I1202 14:45:46.244913 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8d4179d-08e4-4c4b-b009-5d0bdb4351a6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:46 crc kubenswrapper[4814]: I1202 14:45:46.978918 4814 generic.go:334] "Generic (PLEG): container finished" podID="9ce42437-6d96-4856-ac49-711a0959ba93" containerID="236ede8498a93fbdd898305eb868a82484435f01b1af1651a30fd962d91defff" exitCode=0 Dec 02 14:45:46 crc kubenswrapper[4814]: I1202 14:45:46.979976 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl" event={"ID":"9ce42437-6d96-4856-ac49-711a0959ba93","Type":"ContainerDied","Data":"236ede8498a93fbdd898305eb868a82484435f01b1af1651a30fd962d91defff"} Dec 02 14:45:46 crc kubenswrapper[4814]: I1202 14:45:46.981490 4814 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jq7nw" event={"ID":"b8d4179d-08e4-4c4b-b009-5d0bdb4351a6","Type":"ContainerDied","Data":"84dffacb48c82bf4291babc4b1abfef1e3c943c03d21fa91525253f63bc4121d"} Dec 02 14:45:46 crc kubenswrapper[4814]: I1202 14:45:46.981522 4814 scope.go:117] "RemoveContainer" containerID="96daf68ee56aec7330453649cf682ac91e37f5207a56cb19c4d6125c48c00795" Dec 02 14:45:46 crc kubenswrapper[4814]: I1202 14:45:46.981706 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jq7nw" Dec 02 14:45:47 crc kubenswrapper[4814]: I1202 14:45:47.009176 4814 scope.go:117] "RemoveContainer" containerID="790029197e89457a615296dd353e7c7459f090fb29dc2ff47853389465badc85" Dec 02 14:45:47 crc kubenswrapper[4814]: I1202 14:45:47.028997 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jq7nw"] Dec 02 14:45:47 crc kubenswrapper[4814]: I1202 14:45:47.038082 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jq7nw"] Dec 02 14:45:47 crc kubenswrapper[4814]: I1202 14:45:47.040641 4814 scope.go:117] "RemoveContainer" containerID="afa47dce6685b6000fd206708d15e3ba05e0b834e8d00197ccb43a38ac479fa4" Dec 02 14:45:47 crc kubenswrapper[4814]: I1202 14:45:47.995334 4814 generic.go:334] "Generic (PLEG): container finished" podID="9ce42437-6d96-4856-ac49-711a0959ba93" containerID="617db507f46465eb9d231741ed3f3bf5eef4c59b65a17970ead653758e527ce6" exitCode=0 Dec 02 14:45:47 crc kubenswrapper[4814]: I1202 14:45:47.995464 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl" event={"ID":"9ce42437-6d96-4856-ac49-711a0959ba93","Type":"ContainerDied","Data":"617db507f46465eb9d231741ed3f3bf5eef4c59b65a17970ead653758e527ce6"} Dec 02 14:45:48 crc kubenswrapper[4814]: I1202 
14:45:48.157947 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8d4179d-08e4-4c4b-b009-5d0bdb4351a6" path="/var/lib/kubelet/pods/b8d4179d-08e4-4c4b-b009-5d0bdb4351a6/volumes" Dec 02 14:45:49 crc kubenswrapper[4814]: I1202 14:45:49.329218 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl" Dec 02 14:45:49 crc kubenswrapper[4814]: I1202 14:45:49.388680 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ce42437-6d96-4856-ac49-711a0959ba93-bundle\") pod \"9ce42437-6d96-4856-ac49-711a0959ba93\" (UID: \"9ce42437-6d96-4856-ac49-711a0959ba93\") " Dec 02 14:45:49 crc kubenswrapper[4814]: I1202 14:45:49.388764 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlkk4\" (UniqueName: \"kubernetes.io/projected/9ce42437-6d96-4856-ac49-711a0959ba93-kube-api-access-tlkk4\") pod \"9ce42437-6d96-4856-ac49-711a0959ba93\" (UID: \"9ce42437-6d96-4856-ac49-711a0959ba93\") " Dec 02 14:45:49 crc kubenswrapper[4814]: I1202 14:45:49.388864 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ce42437-6d96-4856-ac49-711a0959ba93-util\") pod \"9ce42437-6d96-4856-ac49-711a0959ba93\" (UID: \"9ce42437-6d96-4856-ac49-711a0959ba93\") " Dec 02 14:45:49 crc kubenswrapper[4814]: I1202 14:45:49.390076 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ce42437-6d96-4856-ac49-711a0959ba93-bundle" (OuterVolumeSpecName: "bundle") pod "9ce42437-6d96-4856-ac49-711a0959ba93" (UID: "9ce42437-6d96-4856-ac49-711a0959ba93"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:45:49 crc kubenswrapper[4814]: I1202 14:45:49.395882 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce42437-6d96-4856-ac49-711a0959ba93-kube-api-access-tlkk4" (OuterVolumeSpecName: "kube-api-access-tlkk4") pod "9ce42437-6d96-4856-ac49-711a0959ba93" (UID: "9ce42437-6d96-4856-ac49-711a0959ba93"). InnerVolumeSpecName "kube-api-access-tlkk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:45:49 crc kubenswrapper[4814]: I1202 14:45:49.413115 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ce42437-6d96-4856-ac49-711a0959ba93-util" (OuterVolumeSpecName: "util") pod "9ce42437-6d96-4856-ac49-711a0959ba93" (UID: "9ce42437-6d96-4856-ac49-711a0959ba93"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:45:49 crc kubenswrapper[4814]: I1202 14:45:49.490947 4814 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ce42437-6d96-4856-ac49-711a0959ba93-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:49 crc kubenswrapper[4814]: I1202 14:45:49.490981 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlkk4\" (UniqueName: \"kubernetes.io/projected/9ce42437-6d96-4856-ac49-711a0959ba93-kube-api-access-tlkk4\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:49 crc kubenswrapper[4814]: I1202 14:45:49.490994 4814 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ce42437-6d96-4856-ac49-711a0959ba93-util\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:50 crc kubenswrapper[4814]: I1202 14:45:50.019378 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl" 
event={"ID":"9ce42437-6d96-4856-ac49-711a0959ba93","Type":"ContainerDied","Data":"f1f9d5fbec49104f30f2fdb7c8a205f1c824afb33b3f1cf9ee961f356aaf3f5b"} Dec 02 14:45:50 crc kubenswrapper[4814]: I1202 14:45:50.019432 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1f9d5fbec49104f30f2fdb7c8a205f1c824afb33b3f1cf9ee961f356aaf3f5b" Dec 02 14:45:50 crc kubenswrapper[4814]: I1202 14:45:50.019473 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl" Dec 02 14:45:50 crc kubenswrapper[4814]: I1202 14:45:50.651683 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:45:50 crc kubenswrapper[4814]: I1202 14:45:50.651739 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:45:50 crc kubenswrapper[4814]: I1202 14:45:50.651781 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 14:45:50 crc kubenswrapper[4814]: I1202 14:45:50.652392 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"997855c695d121643798ebc9fbdd2b1a6b7bdb08460863ea230c98ff92c4714f"} pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 
14:45:50 crc kubenswrapper[4814]: I1202 14:45:50.652456 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" containerID="cri-o://997855c695d121643798ebc9fbdd2b1a6b7bdb08460863ea230c98ff92c4714f" gracePeriod=600 Dec 02 14:45:51 crc kubenswrapper[4814]: I1202 14:45:51.030062 4814 generic.go:334] "Generic (PLEG): container finished" podID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerID="997855c695d121643798ebc9fbdd2b1a6b7bdb08460863ea230c98ff92c4714f" exitCode=0 Dec 02 14:45:51 crc kubenswrapper[4814]: I1202 14:45:51.030573 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerDied","Data":"997855c695d121643798ebc9fbdd2b1a6b7bdb08460863ea230c98ff92c4714f"} Dec 02 14:45:51 crc kubenswrapper[4814]: I1202 14:45:51.030603 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerStarted","Data":"1a790dc6cfa4af470ca8102d397fb8348715084a7eb2e984ef5303da30379ee9"} Dec 02 14:45:51 crc kubenswrapper[4814]: I1202 14:45:51.030634 4814 scope.go:117] "RemoveContainer" containerID="a1ed9f1bb911b9ac82f3b2b6f82d455cb053025789269a61268761d5183751b3" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.527684 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c"] Dec 02 14:45:54 crc kubenswrapper[4814]: E1202 14:45:54.528542 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce42437-6d96-4856-ac49-711a0959ba93" containerName="extract" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.528556 4814 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9ce42437-6d96-4856-ac49-711a0959ba93" containerName="extract" Dec 02 14:45:54 crc kubenswrapper[4814]: E1202 14:45:54.528568 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8d4179d-08e4-4c4b-b009-5d0bdb4351a6" containerName="extract-utilities" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.528574 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d4179d-08e4-4c4b-b009-5d0bdb4351a6" containerName="extract-utilities" Dec 02 14:45:54 crc kubenswrapper[4814]: E1202 14:45:54.528584 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8d4179d-08e4-4c4b-b009-5d0bdb4351a6" containerName="registry-server" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.528592 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d4179d-08e4-4c4b-b009-5d0bdb4351a6" containerName="registry-server" Dec 02 14:45:54 crc kubenswrapper[4814]: E1202 14:45:54.528602 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8d4179d-08e4-4c4b-b009-5d0bdb4351a6" containerName="extract-content" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.528607 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d4179d-08e4-4c4b-b009-5d0bdb4351a6" containerName="extract-content" Dec 02 14:45:54 crc kubenswrapper[4814]: E1202 14:45:54.528628 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce42437-6d96-4856-ac49-711a0959ba93" containerName="pull" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.528634 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce42437-6d96-4856-ac49-711a0959ba93" containerName="pull" Dec 02 14:45:54 crc kubenswrapper[4814]: E1202 14:45:54.528648 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce42437-6d96-4856-ac49-711a0959ba93" containerName="util" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.528654 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce42437-6d96-4856-ac49-711a0959ba93" 
containerName="util" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.528791 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8d4179d-08e4-4c4b-b009-5d0bdb4351a6" containerName="registry-server" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.528800 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce42437-6d96-4856-ac49-711a0959ba93" containerName="extract" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.529255 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.531988 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-ts267" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.532113 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-service-cert" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.541404 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c"] Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.562183 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ecbecca6-4535-4f20-bc81-6a57a77c66cc-apiservice-cert\") pod \"watcher-operator-controller-manager-58b8584d74-mgj2c\" (UID: \"ecbecca6-4535-4f20-bc81-6a57a77c66cc\") " pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.562287 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ecbecca6-4535-4f20-bc81-6a57a77c66cc-webhook-cert\") pod 
\"watcher-operator-controller-manager-58b8584d74-mgj2c\" (UID: \"ecbecca6-4535-4f20-bc81-6a57a77c66cc\") " pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.562323 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfjbg\" (UniqueName: \"kubernetes.io/projected/ecbecca6-4535-4f20-bc81-6a57a77c66cc-kube-api-access-rfjbg\") pod \"watcher-operator-controller-manager-58b8584d74-mgj2c\" (UID: \"ecbecca6-4535-4f20-bc81-6a57a77c66cc\") " pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.663856 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ecbecca6-4535-4f20-bc81-6a57a77c66cc-webhook-cert\") pod \"watcher-operator-controller-manager-58b8584d74-mgj2c\" (UID: \"ecbecca6-4535-4f20-bc81-6a57a77c66cc\") " pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.663933 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfjbg\" (UniqueName: \"kubernetes.io/projected/ecbecca6-4535-4f20-bc81-6a57a77c66cc-kube-api-access-rfjbg\") pod \"watcher-operator-controller-manager-58b8584d74-mgj2c\" (UID: \"ecbecca6-4535-4f20-bc81-6a57a77c66cc\") " pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.664012 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ecbecca6-4535-4f20-bc81-6a57a77c66cc-apiservice-cert\") pod \"watcher-operator-controller-manager-58b8584d74-mgj2c\" (UID: \"ecbecca6-4535-4f20-bc81-6a57a77c66cc\") " 
pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.669539 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ecbecca6-4535-4f20-bc81-6a57a77c66cc-apiservice-cert\") pod \"watcher-operator-controller-manager-58b8584d74-mgj2c\" (UID: \"ecbecca6-4535-4f20-bc81-6a57a77c66cc\") " pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.670748 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ecbecca6-4535-4f20-bc81-6a57a77c66cc-webhook-cert\") pod \"watcher-operator-controller-manager-58b8584d74-mgj2c\" (UID: \"ecbecca6-4535-4f20-bc81-6a57a77c66cc\") " pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.684004 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfjbg\" (UniqueName: \"kubernetes.io/projected/ecbecca6-4535-4f20-bc81-6a57a77c66cc-kube-api-access-rfjbg\") pod \"watcher-operator-controller-manager-58b8584d74-mgj2c\" (UID: \"ecbecca6-4535-4f20-bc81-6a57a77c66cc\") " pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" Dec 02 14:45:54 crc kubenswrapper[4814]: I1202 14:45:54.846204 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" Dec 02 14:45:55 crc kubenswrapper[4814]: I1202 14:45:55.363580 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c"] Dec 02 14:45:55 crc kubenswrapper[4814]: W1202 14:45:55.369042 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecbecca6_4535_4f20_bc81_6a57a77c66cc.slice/crio-057f42a97ef9e62c489d568d948b7203319a4bb620964bce06175735c83fefed WatchSource:0}: Error finding container 057f42a97ef9e62c489d568d948b7203319a4bb620964bce06175735c83fefed: Status 404 returned error can't find the container with id 057f42a97ef9e62c489d568d948b7203319a4bb620964bce06175735c83fefed Dec 02 14:45:56 crc kubenswrapper[4814]: I1202 14:45:56.077446 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" event={"ID":"ecbecca6-4535-4f20-bc81-6a57a77c66cc","Type":"ContainerStarted","Data":"30a5a8699355741e413f7570defbd95ab5c3198eab270525adebdd21114613a9"} Dec 02 14:45:56 crc kubenswrapper[4814]: I1202 14:45:56.077750 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" event={"ID":"ecbecca6-4535-4f20-bc81-6a57a77c66cc","Type":"ContainerStarted","Data":"057f42a97ef9e62c489d568d948b7203319a4bb620964bce06175735c83fefed"} Dec 02 14:45:56 crc kubenswrapper[4814]: I1202 14:45:56.077766 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" Dec 02 14:45:56 crc kubenswrapper[4814]: I1202 14:45:56.116401 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" podStartSLOduration=2.116381158 
podStartE2EDuration="2.116381158s" podCreationTimestamp="2025-12-02 14:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:45:56.112628415 +0000 UTC m=+1018.236990619" watchObservedRunningTime="2025-12-02 14:45:56.116381158 +0000 UTC m=+1018.240743372" Dec 02 14:46:04 crc kubenswrapper[4814]: I1202 14:46:04.851666 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" Dec 02 14:46:07 crc kubenswrapper[4814]: I1202 14:46:07.714045 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-987c4dd49-4xbpd"] Dec 02 14:46:07 crc kubenswrapper[4814]: I1202 14:46:07.715272 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-987c4dd49-4xbpd" Dec 02 14:46:07 crc kubenswrapper[4814]: I1202 14:46:07.741128 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-987c4dd49-4xbpd"] Dec 02 14:46:07 crc kubenswrapper[4814]: I1202 14:46:07.749110 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cabeccf-6592-4e14-b15d-bc876d5061f4-webhook-cert\") pod \"watcher-operator-controller-manager-987c4dd49-4xbpd\" (UID: \"0cabeccf-6592-4e14-b15d-bc876d5061f4\") " pod="openstack-operators/watcher-operator-controller-manager-987c4dd49-4xbpd" Dec 02 14:46:07 crc kubenswrapper[4814]: I1202 14:46:07.749192 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mr2q\" (UniqueName: \"kubernetes.io/projected/0cabeccf-6592-4e14-b15d-bc876d5061f4-kube-api-access-9mr2q\") pod \"watcher-operator-controller-manager-987c4dd49-4xbpd\" (UID: 
\"0cabeccf-6592-4e14-b15d-bc876d5061f4\") " pod="openstack-operators/watcher-operator-controller-manager-987c4dd49-4xbpd" Dec 02 14:46:07 crc kubenswrapper[4814]: I1202 14:46:07.749432 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cabeccf-6592-4e14-b15d-bc876d5061f4-apiservice-cert\") pod \"watcher-operator-controller-manager-987c4dd49-4xbpd\" (UID: \"0cabeccf-6592-4e14-b15d-bc876d5061f4\") " pod="openstack-operators/watcher-operator-controller-manager-987c4dd49-4xbpd" Dec 02 14:46:07 crc kubenswrapper[4814]: I1202 14:46:07.850808 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mr2q\" (UniqueName: \"kubernetes.io/projected/0cabeccf-6592-4e14-b15d-bc876d5061f4-kube-api-access-9mr2q\") pod \"watcher-operator-controller-manager-987c4dd49-4xbpd\" (UID: \"0cabeccf-6592-4e14-b15d-bc876d5061f4\") " pod="openstack-operators/watcher-operator-controller-manager-987c4dd49-4xbpd" Dec 02 14:46:07 crc kubenswrapper[4814]: I1202 14:46:07.850926 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cabeccf-6592-4e14-b15d-bc876d5061f4-apiservice-cert\") pod \"watcher-operator-controller-manager-987c4dd49-4xbpd\" (UID: \"0cabeccf-6592-4e14-b15d-bc876d5061f4\") " pod="openstack-operators/watcher-operator-controller-manager-987c4dd49-4xbpd" Dec 02 14:46:07 crc kubenswrapper[4814]: I1202 14:46:07.850967 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cabeccf-6592-4e14-b15d-bc876d5061f4-webhook-cert\") pod \"watcher-operator-controller-manager-987c4dd49-4xbpd\" (UID: \"0cabeccf-6592-4e14-b15d-bc876d5061f4\") " pod="openstack-operators/watcher-operator-controller-manager-987c4dd49-4xbpd" Dec 02 14:46:07 crc kubenswrapper[4814]: I1202 14:46:07.856871 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cabeccf-6592-4e14-b15d-bc876d5061f4-apiservice-cert\") pod \"watcher-operator-controller-manager-987c4dd49-4xbpd\" (UID: \"0cabeccf-6592-4e14-b15d-bc876d5061f4\") " pod="openstack-operators/watcher-operator-controller-manager-987c4dd49-4xbpd" Dec 02 14:46:07 crc kubenswrapper[4814]: I1202 14:46:07.870433 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cabeccf-6592-4e14-b15d-bc876d5061f4-webhook-cert\") pod \"watcher-operator-controller-manager-987c4dd49-4xbpd\" (UID: \"0cabeccf-6592-4e14-b15d-bc876d5061f4\") " pod="openstack-operators/watcher-operator-controller-manager-987c4dd49-4xbpd" Dec 02 14:46:07 crc kubenswrapper[4814]: I1202 14:46:07.918275 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mr2q\" (UniqueName: \"kubernetes.io/projected/0cabeccf-6592-4e14-b15d-bc876d5061f4-kube-api-access-9mr2q\") pod \"watcher-operator-controller-manager-987c4dd49-4xbpd\" (UID: \"0cabeccf-6592-4e14-b15d-bc876d5061f4\") " pod="openstack-operators/watcher-operator-controller-manager-987c4dd49-4xbpd" Dec 02 14:46:08 crc kubenswrapper[4814]: I1202 14:46:08.033677 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-987c4dd49-4xbpd" Dec 02 14:46:08 crc kubenswrapper[4814]: I1202 14:46:08.506762 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-987c4dd49-4xbpd"] Dec 02 14:46:08 crc kubenswrapper[4814]: W1202 14:46:08.511780 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cabeccf_6592_4e14_b15d_bc876d5061f4.slice/crio-adc4a50233ec069eebab23be0cdf4053d0eded4f72e9ae6139789f3a9306780d WatchSource:0}: Error finding container adc4a50233ec069eebab23be0cdf4053d0eded4f72e9ae6139789f3a9306780d: Status 404 returned error can't find the container with id adc4a50233ec069eebab23be0cdf4053d0eded4f72e9ae6139789f3a9306780d Dec 02 14:46:09 crc kubenswrapper[4814]: I1202 14:46:09.168569 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-987c4dd49-4xbpd" event={"ID":"0cabeccf-6592-4e14-b15d-bc876d5061f4","Type":"ContainerStarted","Data":"adb676b4df0aeff7aaf131f8d54ef6675e1590ec93d45fb5f7a821cc9a311687"} Dec 02 14:46:09 crc kubenswrapper[4814]: I1202 14:46:09.168622 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-987c4dd49-4xbpd" event={"ID":"0cabeccf-6592-4e14-b15d-bc876d5061f4","Type":"ContainerStarted","Data":"adc4a50233ec069eebab23be0cdf4053d0eded4f72e9ae6139789f3a9306780d"} Dec 02 14:46:09 crc kubenswrapper[4814]: I1202 14:46:09.168694 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-987c4dd49-4xbpd" Dec 02 14:46:09 crc kubenswrapper[4814]: I1202 14:46:09.200574 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-987c4dd49-4xbpd" podStartSLOduration=2.200548018 
podStartE2EDuration="2.200548018s" podCreationTimestamp="2025-12-02 14:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:46:09.1901982 +0000 UTC m=+1031.314560414" watchObservedRunningTime="2025-12-02 14:46:09.200548018 +0000 UTC m=+1031.324910222" Dec 02 14:46:18 crc kubenswrapper[4814]: I1202 14:46:18.040147 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-987c4dd49-4xbpd" Dec 02 14:46:18 crc kubenswrapper[4814]: I1202 14:46:18.103337 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c"] Dec 02 14:46:18 crc kubenswrapper[4814]: I1202 14:46:18.103576 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" podUID="ecbecca6-4535-4f20-bc81-6a57a77c66cc" containerName="manager" containerID="cri-o://30a5a8699355741e413f7570defbd95ab5c3198eab270525adebdd21114613a9" gracePeriod=10 Dec 02 14:46:19 crc kubenswrapper[4814]: I1202 14:46:19.131564 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" Dec 02 14:46:19 crc kubenswrapper[4814]: I1202 14:46:19.228218 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ecbecca6-4535-4f20-bc81-6a57a77c66cc-apiservice-cert\") pod \"ecbecca6-4535-4f20-bc81-6a57a77c66cc\" (UID: \"ecbecca6-4535-4f20-bc81-6a57a77c66cc\") " Dec 02 14:46:19 crc kubenswrapper[4814]: I1202 14:46:19.228316 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ecbecca6-4535-4f20-bc81-6a57a77c66cc-webhook-cert\") pod \"ecbecca6-4535-4f20-bc81-6a57a77c66cc\" (UID: \"ecbecca6-4535-4f20-bc81-6a57a77c66cc\") " Dec 02 14:46:19 crc kubenswrapper[4814]: I1202 14:46:19.228383 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfjbg\" (UniqueName: \"kubernetes.io/projected/ecbecca6-4535-4f20-bc81-6a57a77c66cc-kube-api-access-rfjbg\") pod \"ecbecca6-4535-4f20-bc81-6a57a77c66cc\" (UID: \"ecbecca6-4535-4f20-bc81-6a57a77c66cc\") " Dec 02 14:46:19 crc kubenswrapper[4814]: I1202 14:46:19.234404 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecbecca6-4535-4f20-bc81-6a57a77c66cc-kube-api-access-rfjbg" (OuterVolumeSpecName: "kube-api-access-rfjbg") pod "ecbecca6-4535-4f20-bc81-6a57a77c66cc" (UID: "ecbecca6-4535-4f20-bc81-6a57a77c66cc"). InnerVolumeSpecName "kube-api-access-rfjbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:46:19 crc kubenswrapper[4814]: I1202 14:46:19.234518 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecbecca6-4535-4f20-bc81-6a57a77c66cc-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "ecbecca6-4535-4f20-bc81-6a57a77c66cc" (UID: "ecbecca6-4535-4f20-bc81-6a57a77c66cc"). 
InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:46:19 crc kubenswrapper[4814]: I1202 14:46:19.241558 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecbecca6-4535-4f20-bc81-6a57a77c66cc-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "ecbecca6-4535-4f20-bc81-6a57a77c66cc" (UID: "ecbecca6-4535-4f20-bc81-6a57a77c66cc"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:46:19 crc kubenswrapper[4814]: I1202 14:46:19.249090 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" Dec 02 14:46:19 crc kubenswrapper[4814]: I1202 14:46:19.249125 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" event={"ID":"ecbecca6-4535-4f20-bc81-6a57a77c66cc","Type":"ContainerDied","Data":"30a5a8699355741e413f7570defbd95ab5c3198eab270525adebdd21114613a9"} Dec 02 14:46:19 crc kubenswrapper[4814]: I1202 14:46:19.249176 4814 scope.go:117] "RemoveContainer" containerID="30a5a8699355741e413f7570defbd95ab5c3198eab270525adebdd21114613a9" Dec 02 14:46:19 crc kubenswrapper[4814]: I1202 14:46:19.249177 4814 generic.go:334] "Generic (PLEG): container finished" podID="ecbecca6-4535-4f20-bc81-6a57a77c66cc" containerID="30a5a8699355741e413f7570defbd95ab5c3198eab270525adebdd21114613a9" exitCode=0 Dec 02 14:46:19 crc kubenswrapper[4814]: I1202 14:46:19.249203 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c" event={"ID":"ecbecca6-4535-4f20-bc81-6a57a77c66cc","Type":"ContainerDied","Data":"057f42a97ef9e62c489d568d948b7203319a4bb620964bce06175735c83fefed"} Dec 02 14:46:19 crc kubenswrapper[4814]: I1202 14:46:19.299148 4814 scope.go:117] "RemoveContainer" 
containerID="30a5a8699355741e413f7570defbd95ab5c3198eab270525adebdd21114613a9" Dec 02 14:46:19 crc kubenswrapper[4814]: E1202 14:46:19.299678 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30a5a8699355741e413f7570defbd95ab5c3198eab270525adebdd21114613a9\": container with ID starting with 30a5a8699355741e413f7570defbd95ab5c3198eab270525adebdd21114613a9 not found: ID does not exist" containerID="30a5a8699355741e413f7570defbd95ab5c3198eab270525adebdd21114613a9" Dec 02 14:46:19 crc kubenswrapper[4814]: I1202 14:46:19.299728 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30a5a8699355741e413f7570defbd95ab5c3198eab270525adebdd21114613a9"} err="failed to get container status \"30a5a8699355741e413f7570defbd95ab5c3198eab270525adebdd21114613a9\": rpc error: code = NotFound desc = could not find container \"30a5a8699355741e413f7570defbd95ab5c3198eab270525adebdd21114613a9\": container with ID starting with 30a5a8699355741e413f7570defbd95ab5c3198eab270525adebdd21114613a9 not found: ID does not exist" Dec 02 14:46:19 crc kubenswrapper[4814]: I1202 14:46:19.304246 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c"] Dec 02 14:46:19 crc kubenswrapper[4814]: I1202 14:46:19.309184 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-58b8584d74-mgj2c"] Dec 02 14:46:19 crc kubenswrapper[4814]: I1202 14:46:19.329886 4814 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ecbecca6-4535-4f20-bc81-6a57a77c66cc-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:46:19 crc kubenswrapper[4814]: I1202 14:46:19.329911 4814 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ecbecca6-4535-4f20-bc81-6a57a77c66cc-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:46:19 crc kubenswrapper[4814]: I1202 14:46:19.329924 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfjbg\" (UniqueName: \"kubernetes.io/projected/ecbecca6-4535-4f20-bc81-6a57a77c66cc-kube-api-access-rfjbg\") on node \"crc\" DevicePath \"\"" Dec 02 14:46:20 crc kubenswrapper[4814]: I1202 14:46:20.160345 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecbecca6-4535-4f20-bc81-6a57a77c66cc" path="/var/lib/kubelet/pods/ecbecca6-4535-4f20-bc81-6a57a77c66cc/volumes" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.835724 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Dec 02 14:46:29 crc kubenswrapper[4814]: E1202 14:46:29.836564 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecbecca6-4535-4f20-bc81-6a57a77c66cc" containerName="manager" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.836576 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbecca6-4535-4f20-bc81-6a57a77c66cc" containerName="manager" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.836740 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecbecca6-4535-4f20-bc81-6a57a77c66cc" containerName="manager" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.844284 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.848092 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-server-dockercfg-6rjhl" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.848174 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"kube-root-ca.crt" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.849312 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-plugins-conf" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.849545 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-default-user" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.849651 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-config-data" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.859017 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-rabbitmq-notifications-svc" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.859099 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-erlang-cookie" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.860288 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-server-conf" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.866460 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openshift-service-ca.crt" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.874329 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Dec 02 14:46:29 
crc kubenswrapper[4814]: I1202 14:46:29.898082 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-44891d58-aeab-48df-8241-caac6ab0318e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-44891d58-aeab-48df-8241-caac6ab0318e\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.898390 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/46b61b10-de07-41cf-9fc0-03b5fc067b56-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.898525 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/46b61b10-de07-41cf-9fc0-03b5fc067b56-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.898663 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/46b61b10-de07-41cf-9fc0-03b5fc067b56-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.898783 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/46b61b10-de07-41cf-9fc0-03b5fc067b56-pod-info\") pod 
\"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.898911 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/46b61b10-de07-41cf-9fc0-03b5fc067b56-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.898948 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/46b61b10-de07-41cf-9fc0-03b5fc067b56-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.898983 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/46b61b10-de07-41cf-9fc0-03b5fc067b56-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.899014 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46b61b10-de07-41cf-9fc0-03b5fc067b56-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.899039 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/46b61b10-de07-41cf-9fc0-03b5fc067b56-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:29 crc kubenswrapper[4814]: I1202 14:46:29.899073 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h68jn\" (UniqueName: \"kubernetes.io/projected/46b61b10-de07-41cf-9fc0-03b5fc067b56-kube-api-access-h68jn\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.002285 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/46b61b10-de07-41cf-9fc0-03b5fc067b56-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.002354 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/46b61b10-de07-41cf-9fc0-03b5fc067b56-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.002381 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/46b61b10-de07-41cf-9fc0-03b5fc067b56-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 
14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.002401 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/46b61b10-de07-41cf-9fc0-03b5fc067b56-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.002427 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/46b61b10-de07-41cf-9fc0-03b5fc067b56-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.002456 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46b61b10-de07-41cf-9fc0-03b5fc067b56-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.002484 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/46b61b10-de07-41cf-9fc0-03b5fc067b56-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.002512 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h68jn\" (UniqueName: \"kubernetes.io/projected/46b61b10-de07-41cf-9fc0-03b5fc067b56-kube-api-access-h68jn\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " 
pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.002579 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-44891d58-aeab-48df-8241-caac6ab0318e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-44891d58-aeab-48df-8241-caac6ab0318e\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.002607 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/46b61b10-de07-41cf-9fc0-03b5fc067b56-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.002642 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/46b61b10-de07-41cf-9fc0-03b5fc067b56-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.003357 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/46b61b10-de07-41cf-9fc0-03b5fc067b56-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.004131 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46b61b10-de07-41cf-9fc0-03b5fc067b56-config-data\") pod 
\"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.004385 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/46b61b10-de07-41cf-9fc0-03b5fc067b56-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.004431 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/46b61b10-de07-41cf-9fc0-03b5fc067b56-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.005533 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/46b61b10-de07-41cf-9fc0-03b5fc067b56-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.009032 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/46b61b10-de07-41cf-9fc0-03b5fc067b56-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.019735 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/46b61b10-de07-41cf-9fc0-03b5fc067b56-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.019860 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/46b61b10-de07-41cf-9fc0-03b5fc067b56-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.020100 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/46b61b10-de07-41cf-9fc0-03b5fc067b56-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.025322 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h68jn\" (UniqueName: \"kubernetes.io/projected/46b61b10-de07-41cf-9fc0-03b5fc067b56-kube-api-access-h68jn\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.025930 4814 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.025959 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-44891d58-aeab-48df-8241-caac6ab0318e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-44891d58-aeab-48df-8241-caac6ab0318e\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9b87e71f8380b860228ff706a1f8601b85c66ee8e266e31d1229b3de1c5c08ae/globalmount\"" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.051470 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-44891d58-aeab-48df-8241-caac6ab0318e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-44891d58-aeab-48df-8241-caac6ab0318e\") pod \"rabbitmq-notifications-server-0\" (UID: \"46b61b10-de07-41cf-9fc0-03b5fc067b56\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.164254 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.531310 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.533074 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.535827 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-server-conf" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.535881 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-config-data" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.535827 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-plugins-conf" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.536187 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-server-dockercfg-swgjg" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.536425 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-rabbitmq-svc" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.537027 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-default-user" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.537794 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-erlang-cookie" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.558949 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.618863 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7504d82f-f8f4-4784-ae11-85688085e352-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.618921 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7504d82f-f8f4-4784-ae11-85688085e352-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.618947 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7504d82f-f8f4-4784-ae11-85688085e352-config-data\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.618976 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7504d82f-f8f4-4784-ae11-85688085e352-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.619011 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7504d82f-f8f4-4784-ae11-85688085e352-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.619106 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7504d82f-f8f4-4784-ae11-85688085e352-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.619148 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7504d82f-f8f4-4784-ae11-85688085e352-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.619207 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7504d82f-f8f4-4784-ae11-85688085e352-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.619271 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-05c48240-2ea2-4ff9-adaa-f8b3bc1a84ea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05c48240-2ea2-4ff9-adaa-f8b3bc1a84ea\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.619295 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7504d82f-f8f4-4784-ae11-85688085e352-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.619411 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvl8b\" (UniqueName: \"kubernetes.io/projected/7504d82f-f8f4-4784-ae11-85688085e352-kube-api-access-jvl8b\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc 
kubenswrapper[4814]: I1202 14:46:30.652526 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.720871 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7504d82f-f8f4-4784-ae11-85688085e352-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.720953 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7504d82f-f8f4-4784-ae11-85688085e352-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.720999 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-05c48240-2ea2-4ff9-adaa-f8b3bc1a84ea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05c48240-2ea2-4ff9-adaa-f8b3bc1a84ea\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.721025 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7504d82f-f8f4-4784-ae11-85688085e352-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.721052 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvl8b\" (UniqueName: 
\"kubernetes.io/projected/7504d82f-f8f4-4784-ae11-85688085e352-kube-api-access-jvl8b\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.721083 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7504d82f-f8f4-4784-ae11-85688085e352-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.721102 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7504d82f-f8f4-4784-ae11-85688085e352-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.721127 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7504d82f-f8f4-4784-ae11-85688085e352-config-data\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.721152 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7504d82f-f8f4-4784-ae11-85688085e352-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.721185 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7504d82f-f8f4-4784-ae11-85688085e352-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.721292 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7504d82f-f8f4-4784-ae11-85688085e352-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.722687 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7504d82f-f8f4-4784-ae11-85688085e352-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.723003 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7504d82f-f8f4-4784-ae11-85688085e352-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.725570 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7504d82f-f8f4-4784-ae11-85688085e352-config-data\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.725835 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7504d82f-f8f4-4784-ae11-85688085e352-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: 
I1202 14:46:30.726352 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7504d82f-f8f4-4784-ae11-85688085e352-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.728510 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7504d82f-f8f4-4784-ae11-85688085e352-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.728529 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7504d82f-f8f4-4784-ae11-85688085e352-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.729072 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7504d82f-f8f4-4784-ae11-85688085e352-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.731192 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7504d82f-f8f4-4784-ae11-85688085e352-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.736491 4814 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.736532 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-05c48240-2ea2-4ff9-adaa-f8b3bc1a84ea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05c48240-2ea2-4ff9-adaa-f8b3bc1a84ea\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fa96afb9d5c053c8b9763c95d8e1fd04b2d2d6cd51622ef73db3ae1291200cc9/globalmount\"" pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.743975 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvl8b\" (UniqueName: \"kubernetes.io/projected/7504d82f-f8f4-4784-ae11-85688085e352-kube-api-access-jvl8b\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.782265 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-05c48240-2ea2-4ff9-adaa-f8b3bc1a84ea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05c48240-2ea2-4ff9-adaa-f8b3bc1a84ea\") pod \"rabbitmq-server-0\" (UID: \"7504d82f-f8f4-4784-ae11-85688085e352\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:30 crc kubenswrapper[4814]: I1202 14:46:30.857920 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:46:31 crc kubenswrapper[4814]: I1202 14:46:31.344086 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"46b61b10-de07-41cf-9fc0-03b5fc067b56","Type":"ContainerStarted","Data":"ab28ef5d65bbb33d28f12d5a8b9d7a48b0673c8df0125ac511a3c41f5c612729"} Dec 02 14:46:31 crc kubenswrapper[4814]: I1202 14:46:31.416625 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Dec 02 14:46:31 crc kubenswrapper[4814]: W1202 14:46:31.422508 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7504d82f_f8f4_4784_ae11_85688085e352.slice/crio-640632d85be9826eea8633f394fe9dd2ffb6d9aa7ca8030edd623a873f41eac6 WatchSource:0}: Error finding container 640632d85be9826eea8633f394fe9dd2ffb6d9aa7ca8030edd623a873f41eac6: Status 404 returned error can't find the container with id 640632d85be9826eea8633f394fe9dd2ffb6d9aa7ca8030edd623a873f41eac6 Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.074355 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.076389 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.079141 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config-data" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.079264 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-scripts" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.079341 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"galera-openstack-dockercfg-jb8fl" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.079426 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-galera-openstack-svc" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.088678 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"combined-ca-bundle" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.094482 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.156936 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nn2q\" (UniqueName: \"kubernetes.io/projected/63005150-087b-49a1-8ac9-d9e8aef28cab-kube-api-access-8nn2q\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.156994 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/63005150-087b-49a1-8ac9-d9e8aef28cab-config-data-default\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc 
kubenswrapper[4814]: I1202 14:46:32.157076 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f7d3bb85-ee50-4aa8-a06c-5751a6c64272\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7d3bb85-ee50-4aa8-a06c-5751a6c64272\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.157179 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/63005150-087b-49a1-8ac9-d9e8aef28cab-kolla-config\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.157280 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/63005150-087b-49a1-8ac9-d9e8aef28cab-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.157328 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63005150-087b-49a1-8ac9-d9e8aef28cab-operator-scripts\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.157391 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/63005150-087b-49a1-8ac9-d9e8aef28cab-config-data-generated\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " 
pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.157587 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63005150-087b-49a1-8ac9-d9e8aef28cab-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.258624 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63005150-087b-49a1-8ac9-d9e8aef28cab-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.258678 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nn2q\" (UniqueName: \"kubernetes.io/projected/63005150-087b-49a1-8ac9-d9e8aef28cab-kube-api-access-8nn2q\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.258709 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/63005150-087b-49a1-8ac9-d9e8aef28cab-config-data-default\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.258744 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f7d3bb85-ee50-4aa8-a06c-5751a6c64272\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7d3bb85-ee50-4aa8-a06c-5751a6c64272\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " 
pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.258767 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/63005150-087b-49a1-8ac9-d9e8aef28cab-kolla-config\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.258786 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/63005150-087b-49a1-8ac9-d9e8aef28cab-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.258801 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63005150-087b-49a1-8ac9-d9e8aef28cab-operator-scripts\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.258829 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/63005150-087b-49a1-8ac9-d9e8aef28cab-config-data-generated\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.259342 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/63005150-087b-49a1-8ac9-d9e8aef28cab-config-data-generated\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc 
kubenswrapper[4814]: I1202 14:46:32.259723 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/63005150-087b-49a1-8ac9-d9e8aef28cab-kolla-config\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.260843 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/63005150-087b-49a1-8ac9-d9e8aef28cab-config-data-default\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.261919 4814 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.261950 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f7d3bb85-ee50-4aa8-a06c-5751a6c64272\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7d3bb85-ee50-4aa8-a06c-5751a6c64272\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3d8a960912b859c95aaacb0c5cf643ae01c17a2ae84959552cd0cbc691e9d4ae/globalmount\"" pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.264676 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/63005150-087b-49a1-8ac9-d9e8aef28cab-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.266139 4814 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63005150-087b-49a1-8ac9-d9e8aef28cab-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.286349 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nn2q\" (UniqueName: \"kubernetes.io/projected/63005150-087b-49a1-8ac9-d9e8aef28cab-kube-api-access-8nn2q\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.294114 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f7d3bb85-ee50-4aa8-a06c-5751a6c64272\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7d3bb85-ee50-4aa8-a06c-5751a6c64272\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.354188 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"7504d82f-f8f4-4784-ae11-85688085e352","Type":"ContainerStarted","Data":"640632d85be9826eea8633f394fe9dd2ffb6d9aa7ca8030edd623a873f41eac6"} Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.371030 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63005150-087b-49a1-8ac9-d9e8aef28cab-operator-scripts\") pod \"openstack-galera-0\" (UID: \"63005150-087b-49a1-8ac9-d9e8aef28cab\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.391086 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/memcached-0"] Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.392088 
4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.396536 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"memcached-memcached-dockercfg-5b6w2" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.396601 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"memcached-config-data" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.396844 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-svc" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.415420 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.418960 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.463446 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64bd5b07-0227-4d38-9275-f1d59d7ce529-config-data\") pod \"memcached-0\" (UID: \"64bd5b07-0227-4d38-9275-f1d59d7ce529\") " pod="watcher-kuttl-default/memcached-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.463515 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbswz\" (UniqueName: \"kubernetes.io/projected/64bd5b07-0227-4d38-9275-f1d59d7ce529-kube-api-access-bbswz\") pod \"memcached-0\" (UID: \"64bd5b07-0227-4d38-9275-f1d59d7ce529\") " pod="watcher-kuttl-default/memcached-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.463566 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/64bd5b07-0227-4d38-9275-f1d59d7ce529-kolla-config\") pod \"memcached-0\" (UID: \"64bd5b07-0227-4d38-9275-f1d59d7ce529\") " pod="watcher-kuttl-default/memcached-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.463660 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/64bd5b07-0227-4d38-9275-f1d59d7ce529-memcached-tls-certs\") pod \"memcached-0\" (UID: \"64bd5b07-0227-4d38-9275-f1d59d7ce529\") " pod="watcher-kuttl-default/memcached-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.463924 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64bd5b07-0227-4d38-9275-f1d59d7ce529-combined-ca-bundle\") pod \"memcached-0\" (UID: \"64bd5b07-0227-4d38-9275-f1d59d7ce529\") " pod="watcher-kuttl-default/memcached-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.565331 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64bd5b07-0227-4d38-9275-f1d59d7ce529-config-data\") pod \"memcached-0\" (UID: \"64bd5b07-0227-4d38-9275-f1d59d7ce529\") " pod="watcher-kuttl-default/memcached-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.565691 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbswz\" (UniqueName: \"kubernetes.io/projected/64bd5b07-0227-4d38-9275-f1d59d7ce529-kube-api-access-bbswz\") pod \"memcached-0\" (UID: \"64bd5b07-0227-4d38-9275-f1d59d7ce529\") " pod="watcher-kuttl-default/memcached-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.565731 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/64bd5b07-0227-4d38-9275-f1d59d7ce529-kolla-config\") pod \"memcached-0\" (UID: 
\"64bd5b07-0227-4d38-9275-f1d59d7ce529\") " pod="watcher-kuttl-default/memcached-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.565752 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/64bd5b07-0227-4d38-9275-f1d59d7ce529-memcached-tls-certs\") pod \"memcached-0\" (UID: \"64bd5b07-0227-4d38-9275-f1d59d7ce529\") " pod="watcher-kuttl-default/memcached-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.565799 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64bd5b07-0227-4d38-9275-f1d59d7ce529-combined-ca-bundle\") pod \"memcached-0\" (UID: \"64bd5b07-0227-4d38-9275-f1d59d7ce529\") " pod="watcher-kuttl-default/memcached-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.566133 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64bd5b07-0227-4d38-9275-f1d59d7ce529-config-data\") pod \"memcached-0\" (UID: \"64bd5b07-0227-4d38-9275-f1d59d7ce529\") " pod="watcher-kuttl-default/memcached-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.566532 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/64bd5b07-0227-4d38-9275-f1d59d7ce529-kolla-config\") pod \"memcached-0\" (UID: \"64bd5b07-0227-4d38-9275-f1d59d7ce529\") " pod="watcher-kuttl-default/memcached-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.569959 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/64bd5b07-0227-4d38-9275-f1d59d7ce529-memcached-tls-certs\") pod \"memcached-0\" (UID: \"64bd5b07-0227-4d38-9275-f1d59d7ce529\") " pod="watcher-kuttl-default/memcached-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.570472 4814 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64bd5b07-0227-4d38-9275-f1d59d7ce529-combined-ca-bundle\") pod \"memcached-0\" (UID: \"64bd5b07-0227-4d38-9275-f1d59d7ce529\") " pod="watcher-kuttl-default/memcached-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.587264 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbswz\" (UniqueName: \"kubernetes.io/projected/64bd5b07-0227-4d38-9275-f1d59d7ce529-kube-api-access-bbswz\") pod \"memcached-0\" (UID: \"64bd5b07-0227-4d38-9275-f1d59d7ce529\") " pod="watcher-kuttl-default/memcached-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.660016 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.664409 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.667819 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.669430 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"telemetry-ceilometer-dockercfg-z55k8" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.721777 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/memcached-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.768028 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cdc2\" (UniqueName: \"kubernetes.io/projected/b0859259-ecec-4400-8770-ed1a52f94dc4-kube-api-access-2cdc2\") pod \"kube-state-metrics-0\" (UID: \"b0859259-ecec-4400-8770-ed1a52f94dc4\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.869329 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cdc2\" (UniqueName: \"kubernetes.io/projected/b0859259-ecec-4400-8770-ed1a52f94dc4-kube-api-access-2cdc2\") pod \"kube-state-metrics-0\" (UID: \"b0859259-ecec-4400-8770-ed1a52f94dc4\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.899929 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cdc2\" (UniqueName: \"kubernetes.io/projected/b0859259-ecec-4400-8770-ed1a52f94dc4-kube-api-access-2cdc2\") pod \"kube-state-metrics-0\" (UID: \"b0859259-ecec-4400-8770-ed1a52f94dc4\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.935359 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Dec 02 14:46:32 crc kubenswrapper[4814]: I1202 14:46:32.993375 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:46:33 crc kubenswrapper[4814]: W1202 14:46:33.013473 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63005150_087b_49a1_8ac9_d9e8aef28cab.slice/crio-7e7495a2dc66a4e2f32d516f5b3e6adc028cecfadab105d17dc19d6278ebc09f WatchSource:0}: Error finding container 7e7495a2dc66a4e2f32d516f5b3e6adc028cecfadab105d17dc19d6278ebc09f: Status 404 returned error can't find the container with id 7e7495a2dc66a4e2f32d516f5b3e6adc028cecfadab105d17dc19d6278ebc09f Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.371433 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.373815 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.382932 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-generated" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.383028 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-tls-assets-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.383489 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-alertmanager-dockercfg-dfjkn" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.383650 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-web-config" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.384263 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-cluster-tls-config" Dec 02 14:46:33 crc 
kubenswrapper[4814]: I1202 14:46:33.393464 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"63005150-087b-49a1-8ac9-d9e8aef28cab","Type":"ContainerStarted","Data":"7e7495a2dc66a4e2f32d516f5b3e6adc028cecfadab105d17dc19d6278ebc09f"} Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.396644 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.481442 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.481691 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tbjz\" (UniqueName: \"kubernetes.io/projected/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-kube-api-access-6tbjz\") pod \"alertmanager-metric-storage-0\" (UID: \"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.481725 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.481743 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/secret/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.481783 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.481814 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.481835 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.548858 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.583534 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tbjz\" (UniqueName: \"kubernetes.io/projected/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-kube-api-access-6tbjz\") pod \"alertmanager-metric-storage-0\" (UID: 
\"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.583604 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.583648 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.583691 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.583722 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.583744 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-config-out\") pod 
\"alertmanager-metric-storage-0\" (UID: \"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.583793 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.584944 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.588394 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.594450 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.596008 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-web-config\") pod 
\"alertmanager-metric-storage-0\" (UID: \"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.598472 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.610285 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.615287 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tbjz\" (UniqueName: \"kubernetes.io/projected/5e2f4172-7a16-4b44-9a2d-796ea4969ae9-kube-api-access-6tbjz\") pod \"alertmanager-metric-storage-0\" (UID: \"5e2f4172-7a16-4b44-9a2d-796ea4969ae9\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.709183 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.720962 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.751293 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-kk8rg"] Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.752532 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-kk8rg" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.762010 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-mndzn" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.762205 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.773840 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-kk8rg"] Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.790909 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-kk8rg\" (UID: \"ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-kk8rg" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.790965 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2xp9\" (UniqueName: \"kubernetes.io/projected/ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f-kube-api-access-c2xp9\") pod 
\"observability-ui-dashboards-7d5fb4cbfb-kk8rg\" (UID: \"ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-kk8rg" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.893791 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-kk8rg\" (UID: \"ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-kk8rg" Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.894088 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2xp9\" (UniqueName: \"kubernetes.io/projected/ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f-kube-api-access-c2xp9\") pod \"observability-ui-dashboards-7d5fb4cbfb-kk8rg\" (UID: \"ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-kk8rg" Dec 02 14:46:33 crc kubenswrapper[4814]: E1202 14:46:33.894442 4814 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Dec 02 14:46:33 crc kubenswrapper[4814]: E1202 14:46:33.894487 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f-serving-cert podName:ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f nodeName:}" failed. No retries permitted until 2025-12-02 14:46:34.394472377 +0000 UTC m=+1056.518834581 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f-serving-cert") pod "observability-ui-dashboards-7d5fb4cbfb-kk8rg" (UID: "ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f") : secret "observability-ui-dashboards" not found Dec 02 14:46:33 crc kubenswrapper[4814]: I1202 14:46:33.932994 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2xp9\" (UniqueName: \"kubernetes.io/projected/ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f-kube-api-access-c2xp9\") pod \"observability-ui-dashboards-7d5fb4cbfb-kk8rg\" (UID: \"ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-kk8rg" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.027322 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.029075 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.034602 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-prometheus-dockercfg-88898" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.035028 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-web-config" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.035168 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.035410 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.035537 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-tls-assets-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.045391 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.059450 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.097286 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/599f7de4-aead-4401-a25c-59c97b390339-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.097332 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/599f7de4-aead-4401-a25c-59c97b390339-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.097370 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.097388 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/599f7de4-aead-4401-a25c-59c97b390339-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.097406 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/599f7de4-aead-4401-a25c-59c97b390339-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.097440 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/599f7de4-aead-4401-a25c-59c97b390339-config\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " 
pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.097482 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzvnf\" (UniqueName: \"kubernetes.io/projected/599f7de4-aead-4401-a25c-59c97b390339-kube-api-access-xzvnf\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.097501 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/599f7de4-aead-4401-a25c-59c97b390339-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.100194 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-54fcb48cc4-s9cs9"] Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.107569 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.128546 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-54fcb48cc4-s9cs9"] Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.200195 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/599f7de4-aead-4401-a25c-59c97b390339-config\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.200641 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzvnf\" (UniqueName: \"kubernetes.io/projected/599f7de4-aead-4401-a25c-59c97b390339-kube-api-access-xzvnf\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.200723 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/599f7de4-aead-4401-a25c-59c97b390339-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.200793 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/51873d67-f777-49ac-bd6f-7caa17d04580-service-ca\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.200905 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/599f7de4-aead-4401-a25c-59c97b390339-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.200955 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/599f7de4-aead-4401-a25c-59c97b390339-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.200980 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/51873d67-f777-49ac-bd6f-7caa17d04580-console-oauth-config\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.200998 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndqpg\" (UniqueName: \"kubernetes.io/projected/51873d67-f777-49ac-bd6f-7caa17d04580-kube-api-access-ndqpg\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.201026 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/51873d67-f777-49ac-bd6f-7caa17d04580-console-serving-cert\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.201076 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.201095 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/51873d67-f777-49ac-bd6f-7caa17d04580-oauth-serving-cert\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.201116 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/599f7de4-aead-4401-a25c-59c97b390339-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.201144 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/599f7de4-aead-4401-a25c-59c97b390339-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.201160 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/51873d67-f777-49ac-bd6f-7caa17d04580-console-config\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " 
pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.201195 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51873d67-f777-49ac-bd6f-7caa17d04580-trusted-ca-bundle\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: E1202 14:46:34.201683 4814 configmap.go:193] Couldn't get configMap watcher-kuttl-default/prometheus-metric-storage-rulefiles-0: configmap "prometheus-metric-storage-rulefiles-0" not found Dec 02 14:46:34 crc kubenswrapper[4814]: E1202 14:46:34.201728 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/599f7de4-aead-4401-a25c-59c97b390339-prometheus-metric-storage-rulefiles-0 podName:599f7de4-aead-4401-a25c-59c97b390339 nodeName:}" failed. No retries permitted until 2025-12-02 14:46:34.701715502 +0000 UTC m=+1056.826077706 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-metric-storage-rulefiles-0" (UniqueName: "kubernetes.io/configmap/599f7de4-aead-4401-a25c-59c97b390339-prometheus-metric-storage-rulefiles-0") pod "prometheus-metric-storage-0" (UID: "599f7de4-aead-4401-a25c-59c97b390339") : configmap "prometheus-metric-storage-rulefiles-0" not found Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.225922 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/599f7de4-aead-4401-a25c-59c97b390339-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.227076 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/599f7de4-aead-4401-a25c-59c97b390339-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.230215 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/599f7de4-aead-4401-a25c-59c97b390339-config\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.231612 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/599f7de4-aead-4401-a25c-59c97b390339-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.234915 4814 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/599f7de4-aead-4401-a25c-59c97b390339-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.241396 4814 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.241626 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c53988640793b331e909f437f491a3f424308ec4a276ce6541ad4095c8223a9e/globalmount\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.243728 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzvnf\" (UniqueName: \"kubernetes.io/projected/599f7de4-aead-4401-a25c-59c97b390339-kube-api-access-xzvnf\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.288730 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.302024 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/51873d67-f777-49ac-bd6f-7caa17d04580-console-oauth-config\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.302069 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndqpg\" (UniqueName: \"kubernetes.io/projected/51873d67-f777-49ac-bd6f-7caa17d04580-kube-api-access-ndqpg\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.302089 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/51873d67-f777-49ac-bd6f-7caa17d04580-console-serving-cert\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.302117 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/51873d67-f777-49ac-bd6f-7caa17d04580-oauth-serving-cert\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.302148 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/51873d67-f777-49ac-bd6f-7caa17d04580-console-config\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.302173 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51873d67-f777-49ac-bd6f-7caa17d04580-trusted-ca-bundle\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.302286 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/51873d67-f777-49ac-bd6f-7caa17d04580-service-ca\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.303153 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/51873d67-f777-49ac-bd6f-7caa17d04580-service-ca\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.303785 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/51873d67-f777-49ac-bd6f-7caa17d04580-oauth-serving-cert\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.304036 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/51873d67-f777-49ac-bd6f-7caa17d04580-console-config\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.305019 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51873d67-f777-49ac-bd6f-7caa17d04580-trusted-ca-bundle\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.312140 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/51873d67-f777-49ac-bd6f-7caa17d04580-console-serving-cert\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.322104 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/51873d67-f777-49ac-bd6f-7caa17d04580-console-oauth-config\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.348248 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndqpg\" (UniqueName: \"kubernetes.io/projected/51873d67-f777-49ac-bd6f-7caa17d04580-kube-api-access-ndqpg\") pod \"console-54fcb48cc4-s9cs9\" (UID: \"51873d67-f777-49ac-bd6f-7caa17d04580\") " pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.385089 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.404194 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-kk8rg\" (UID: \"ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f\") " 
pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-kk8rg" Dec 02 14:46:34 crc kubenswrapper[4814]: W1202 14:46:34.405552 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e2f4172_7a16_4b44_9a2d_796ea4969ae9.slice/crio-1f6b2dde5b2c363a14c2c2435f8b134df7a137bcdace3ce92cb7bcdb581a37a2 WatchSource:0}: Error finding container 1f6b2dde5b2c363a14c2c2435f8b134df7a137bcdace3ce92cb7bcdb581a37a2: Status 404 returned error can't find the container with id 1f6b2dde5b2c363a14c2c2435f8b134df7a137bcdace3ce92cb7bcdb581a37a2 Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.411138 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-kk8rg\" (UID: \"ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-kk8rg" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.425707 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"b0859259-ecec-4400-8770-ed1a52f94dc4","Type":"ContainerStarted","Data":"bc9e85b55bb6c8f5b0557307ad9cf6d14eebf127fa9528cf3c560e7f8411a02d"} Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.430395 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"64bd5b07-0227-4d38-9275-f1d59d7ce529","Type":"ContainerStarted","Data":"e704bcc768feec6a2965a07ae7030c204fa266af52c8f3a63d50ba2ca412b58f"} Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.479912 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.698551 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-kk8rg" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.712882 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/599f7de4-aead-4401-a25c-59c97b390339-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.713957 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/599f7de4-aead-4401-a25c-59c97b390339-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.963724 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-54fcb48cc4-s9cs9"] Dec 02 14:46:34 crc kubenswrapper[4814]: I1202 14:46:34.996384 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:46:35 crc kubenswrapper[4814]: I1202 14:46:35.261546 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-kk8rg"] Dec 02 14:46:35 crc kubenswrapper[4814]: I1202 14:46:35.439256 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-54fcb48cc4-s9cs9" event={"ID":"51873d67-f777-49ac-bd6f-7caa17d04580","Type":"ContainerStarted","Data":"4047e348f45def0a184cfef15550536bb8fd22d657aa01d9171fef32861b92cb"} Dec 02 14:46:35 crc kubenswrapper[4814]: I1202 14:46:35.441573 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"5e2f4172-7a16-4b44-9a2d-796ea4969ae9","Type":"ContainerStarted","Data":"1f6b2dde5b2c363a14c2c2435f8b134df7a137bcdace3ce92cb7bcdb581a37a2"} Dec 02 14:46:35 crc kubenswrapper[4814]: I1202 14:46:35.446437 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-kk8rg" event={"ID":"ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f","Type":"ContainerStarted","Data":"48c77a78be22675e013538680a8911b54f7733aa06a98f521dade337b1dfcfde"} Dec 02 14:46:35 crc kubenswrapper[4814]: I1202 14:46:35.709083 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Dec 02 14:46:36 crc kubenswrapper[4814]: I1202 14:46:36.460367 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"599f7de4-aead-4401-a25c-59c97b390339","Type":"ContainerStarted","Data":"e5fb1d1cfc4aa31e0967797157df86a11324905c515e6802e073d4297a54cb5e"} Dec 02 14:46:38 crc kubenswrapper[4814]: I1202 14:46:38.479601 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-54fcb48cc4-s9cs9" 
event={"ID":"51873d67-f777-49ac-bd6f-7caa17d04580","Type":"ContainerStarted","Data":"1e41e5c0b583f54e2015b941a15c31500d3dc6ff67c63daa5ba0820a543687ad"} Dec 02 14:46:38 crc kubenswrapper[4814]: I1202 14:46:38.496884 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-54fcb48cc4-s9cs9" podStartSLOduration=4.496866567 podStartE2EDuration="4.496866567s" podCreationTimestamp="2025-12-02 14:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:46:38.493711289 +0000 UTC m=+1060.618073513" watchObservedRunningTime="2025-12-02 14:46:38.496866567 +0000 UTC m=+1060.621228771" Dec 02 14:46:44 crc kubenswrapper[4814]: I1202 14:46:44.480622 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:44 crc kubenswrapper[4814]: I1202 14:46:44.481205 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:44 crc kubenswrapper[4814]: I1202 14:46:44.486009 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:44 crc kubenswrapper[4814]: I1202 14:46:44.536559 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-54fcb48cc4-s9cs9" Dec 02 14:46:44 crc kubenswrapper[4814]: I1202 14:46:44.601535 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7f988dc77-nvn6q"] Dec 02 14:46:51 crc kubenswrapper[4814]: I1202 14:46:51.599093 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"63005150-087b-49a1-8ac9-d9e8aef28cab","Type":"ContainerStarted","Data":"2fe687f3e4d4da84934d8a7daa73118a3058a62fa70b86de43670b9c9e2401ee"} Dec 02 14:46:51 crc kubenswrapper[4814]: I1202 
14:46:51.603767 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-kk8rg" event={"ID":"ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f","Type":"ContainerStarted","Data":"ee2a666699324eaaec21fcd06436e5ed88028a2fea87a52890305b7fd49e6bcb"} Dec 02 14:46:51 crc kubenswrapper[4814]: I1202 14:46:51.658640 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-kk8rg" podStartSLOduration=3.299807661 podStartE2EDuration="18.658616705s" podCreationTimestamp="2025-12-02 14:46:33 +0000 UTC" firstStartedPulling="2025-12-02 14:46:35.296465807 +0000 UTC m=+1057.420828011" lastFinishedPulling="2025-12-02 14:46:50.655274851 +0000 UTC m=+1072.779637055" observedRunningTime="2025-12-02 14:46:51.653475567 +0000 UTC m=+1073.777837771" watchObservedRunningTime="2025-12-02 14:46:51.658616705 +0000 UTC m=+1073.782978899" Dec 02 14:46:52 crc kubenswrapper[4814]: I1202 14:46:52.615455 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"64bd5b07-0227-4d38-9275-f1d59d7ce529","Type":"ContainerStarted","Data":"ddf5261d044c1071399ee11c2b70eb8d2bb3462fb8db1fcc0535f399a7ab4637"} Dec 02 14:46:52 crc kubenswrapper[4814]: I1202 14:46:52.658169 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/memcached-0" podStartSLOduration=2.999468667 podStartE2EDuration="20.658150324s" podCreationTimestamp="2025-12-02 14:46:32 +0000 UTC" firstStartedPulling="2025-12-02 14:46:33.547171901 +0000 UTC m=+1055.671534105" lastFinishedPulling="2025-12-02 14:46:51.205853558 +0000 UTC m=+1073.330215762" observedRunningTime="2025-12-02 14:46:52.655257942 +0000 UTC m=+1074.779620166" watchObservedRunningTime="2025-12-02 14:46:52.658150324 +0000 UTC m=+1074.782512528" Dec 02 14:46:52 crc kubenswrapper[4814]: I1202 14:46:52.722840 4814 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="watcher-kuttl-default/memcached-0" Dec 02 14:46:53 crc kubenswrapper[4814]: I1202 14:46:53.626847 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"46b61b10-de07-41cf-9fc0-03b5fc067b56","Type":"ContainerStarted","Data":"02ae3b2db13001840796c24fc81555b41ac7da3cef420dac906b08da9f5b2fd9"} Dec 02 14:46:53 crc kubenswrapper[4814]: I1202 14:46:53.629321 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"7504d82f-f8f4-4784-ae11-85688085e352","Type":"ContainerStarted","Data":"2686039f5634379aafc9351a508e5ae79563f801244d4ea6c033e6901453b4ab"} Dec 02 14:46:54 crc kubenswrapper[4814]: I1202 14:46:54.641066 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"599f7de4-aead-4401-a25c-59c97b390339","Type":"ContainerStarted","Data":"11d0555eacf719819342f91488a7aa1e098a9790ac9a67d4738a73e3f29dda6f"} Dec 02 14:46:54 crc kubenswrapper[4814]: I1202 14:46:54.643719 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"5e2f4172-7a16-4b44-9a2d-796ea4969ae9","Type":"ContainerStarted","Data":"49f88353f730eaea75c6542320e46cf60bc91ad10afad191f879d67fc5c0de02"} Dec 02 14:46:55 crc kubenswrapper[4814]: I1202 14:46:55.652984 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"b0859259-ecec-4400-8770-ed1a52f94dc4","Type":"ContainerStarted","Data":"fe57edb25acfe3cffef930fdfbca2eb7cbe89d239fef03c34db9ed1a3c3c082f"} Dec 02 14:46:55 crc kubenswrapper[4814]: I1202 14:46:55.673137 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/kube-state-metrics-0" podStartSLOduration=2.93185809 podStartE2EDuration="23.673117508s" podCreationTimestamp="2025-12-02 14:46:32 +0000 UTC" 
firstStartedPulling="2025-12-02 14:46:33.854778156 +0000 UTC m=+1055.979140350" lastFinishedPulling="2025-12-02 14:46:54.596037564 +0000 UTC m=+1076.720399768" observedRunningTime="2025-12-02 14:46:55.669418615 +0000 UTC m=+1077.793780839" watchObservedRunningTime="2025-12-02 14:46:55.673117508 +0000 UTC m=+1077.797479712" Dec 02 14:46:56 crc kubenswrapper[4814]: I1202 14:46:56.660139 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:46:57 crc kubenswrapper[4814]: I1202 14:46:57.667860 4814 generic.go:334] "Generic (PLEG): container finished" podID="63005150-087b-49a1-8ac9-d9e8aef28cab" containerID="2fe687f3e4d4da84934d8a7daa73118a3058a62fa70b86de43670b9c9e2401ee" exitCode=0 Dec 02 14:46:57 crc kubenswrapper[4814]: I1202 14:46:57.667919 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"63005150-087b-49a1-8ac9-d9e8aef28cab","Type":"ContainerDied","Data":"2fe687f3e4d4da84934d8a7daa73118a3058a62fa70b86de43670b9c9e2401ee"} Dec 02 14:46:57 crc kubenswrapper[4814]: I1202 14:46:57.724082 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/memcached-0" Dec 02 14:46:58 crc kubenswrapper[4814]: I1202 14:46:58.678662 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"63005150-087b-49a1-8ac9-d9e8aef28cab","Type":"ContainerStarted","Data":"750c553d3cd8e39cf80962c5c3f0256dee6c7680fde28a6b6ee91a6cbe728558"} Dec 02 14:46:58 crc kubenswrapper[4814]: I1202 14:46:58.701706 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/openstack-galera-0" podStartSLOduration=9.964423133 podStartE2EDuration="27.701686331s" podCreationTimestamp="2025-12-02 14:46:31 +0000 UTC" firstStartedPulling="2025-12-02 14:46:33.098195229 +0000 UTC m=+1055.222557433" lastFinishedPulling="2025-12-02 
14:46:50.835458397 +0000 UTC m=+1072.959820631" observedRunningTime="2025-12-02 14:46:58.697496236 +0000 UTC m=+1080.821858450" watchObservedRunningTime="2025-12-02 14:46:58.701686331 +0000 UTC m=+1080.826048535" Dec 02 14:47:00 crc kubenswrapper[4814]: I1202 14:47:00.700275 4814 generic.go:334] "Generic (PLEG): container finished" podID="5e2f4172-7a16-4b44-9a2d-796ea4969ae9" containerID="49f88353f730eaea75c6542320e46cf60bc91ad10afad191f879d67fc5c0de02" exitCode=0 Dec 02 14:47:00 crc kubenswrapper[4814]: I1202 14:47:00.700429 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"5e2f4172-7a16-4b44-9a2d-796ea4969ae9","Type":"ContainerDied","Data":"49f88353f730eaea75c6542320e46cf60bc91ad10afad191f879d67fc5c0de02"} Dec 02 14:47:01 crc kubenswrapper[4814]: I1202 14:47:01.709368 4814 generic.go:334] "Generic (PLEG): container finished" podID="599f7de4-aead-4401-a25c-59c97b390339" containerID="11d0555eacf719819342f91488a7aa1e098a9790ac9a67d4738a73e3f29dda6f" exitCode=0 Dec 02 14:47:01 crc kubenswrapper[4814]: I1202 14:47:01.709405 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"599f7de4-aead-4401-a25c-59c97b390339","Type":"ContainerDied","Data":"11d0555eacf719819342f91488a7aa1e098a9790ac9a67d4738a73e3f29dda6f"} Dec 02 14:47:02 crc kubenswrapper[4814]: I1202 14:47:02.419487 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:47:02 crc kubenswrapper[4814]: I1202 14:47:02.419763 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:47:03 crc kubenswrapper[4814]: I1202 14:47:03.005061 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:47:05 crc kubenswrapper[4814]: I1202 14:47:05.787833 4814 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"5e2f4172-7a16-4b44-9a2d-796ea4969ae9","Type":"ContainerStarted","Data":"968b15a54fd0828b8c569bb2dc0a124be968fa21f1e4f5350f2520299e2de93d"} Dec 02 14:47:07 crc kubenswrapper[4814]: I1202 14:47:07.020035 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:47:07 crc kubenswrapper[4814]: I1202 14:47:07.293123 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/openstack-galera-0" Dec 02 14:47:08 crc kubenswrapper[4814]: I1202 14:47:08.812631 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"5e2f4172-7a16-4b44-9a2d-796ea4969ae9","Type":"ContainerStarted","Data":"b4b4b6b7d2533135327ceb8cb3db864dcfcb7c6210ad287f22b8256ff2f8407d"} Dec 02 14:47:08 crc kubenswrapper[4814]: I1202 14:47:08.813382 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:47:08 crc kubenswrapper[4814]: I1202 14:47:08.816100 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 14:47:08 crc kubenswrapper[4814]: I1202 14:47:08.835292 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/alertmanager-metric-storage-0" podStartSLOduration=4.805018856 podStartE2EDuration="35.835270355s" podCreationTimestamp="2025-12-02 14:46:33 +0000 UTC" firstStartedPulling="2025-12-02 14:46:34.427183428 +0000 UTC m=+1056.551545632" lastFinishedPulling="2025-12-02 14:47:05.457434927 +0000 UTC m=+1087.581797131" observedRunningTime="2025-12-02 14:47:08.830832424 +0000 UTC m=+1090.955194628" watchObservedRunningTime="2025-12-02 14:47:08.835270355 +0000 UTC m=+1090.959632559" Dec 02 14:47:09 
crc kubenswrapper[4814]: I1202 14:47:09.658039 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7f988dc77-nvn6q" podUID="1d3a481c-e450-4d5b-a5a2-54eac6642566" containerName="console" containerID="cri-o://151c14fe5549b053799dadc161fbd4771d2a94df0d80d0cad069556cf76636ff" gracePeriod=15 Dec 02 14:47:10 crc kubenswrapper[4814]: I1202 14:47:10.830330 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7f988dc77-nvn6q_1d3a481c-e450-4d5b-a5a2-54eac6642566/console/0.log" Dec 02 14:47:10 crc kubenswrapper[4814]: I1202 14:47:10.830387 4814 generic.go:334] "Generic (PLEG): container finished" podID="1d3a481c-e450-4d5b-a5a2-54eac6642566" containerID="151c14fe5549b053799dadc161fbd4771d2a94df0d80d0cad069556cf76636ff" exitCode=2 Dec 02 14:47:10 crc kubenswrapper[4814]: I1202 14:47:10.830493 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f988dc77-nvn6q" event={"ID":"1d3a481c-e450-4d5b-a5a2-54eac6642566","Type":"ContainerDied","Data":"151c14fe5549b053799dadc161fbd4771d2a94df0d80d0cad069556cf76636ff"} Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.308395 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7f988dc77-nvn6q_1d3a481c-e450-4d5b-a5a2-54eac6642566/console/0.log" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.308832 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.324312 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-29ba-account-create-update-7lxtx"] Dec 02 14:47:12 crc kubenswrapper[4814]: E1202 14:47:12.324835 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3a481c-e450-4d5b-a5a2-54eac6642566" containerName="console" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.324900 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3a481c-e450-4d5b-a5a2-54eac6642566" containerName="console" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.325163 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3a481c-e450-4d5b-a5a2-54eac6642566" containerName="console" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.326070 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-29ba-account-create-update-7lxtx" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.327534 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-db-secret" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.349273 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-29ba-account-create-update-7lxtx"] Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.408787 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-service-ca\") pod \"1d3a481c-e450-4d5b-a5a2-54eac6642566\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.408855 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-trusted-ca-bundle\") pod \"1d3a481c-e450-4d5b-a5a2-54eac6642566\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.408909 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-oauth-serving-cert\") pod \"1d3a481c-e450-4d5b-a5a2-54eac6642566\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.408970 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp5lz\" (UniqueName: \"kubernetes.io/projected/1d3a481c-e450-4d5b-a5a2-54eac6642566-kube-api-access-hp5lz\") pod \"1d3a481c-e450-4d5b-a5a2-54eac6642566\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.409028 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3a481c-e450-4d5b-a5a2-54eac6642566-console-serving-cert\") pod \"1d3a481c-e450-4d5b-a5a2-54eac6642566\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.409099 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1d3a481c-e450-4d5b-a5a2-54eac6642566-console-oauth-config\") pod \"1d3a481c-e450-4d5b-a5a2-54eac6642566\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.409157 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-console-config\") pod \"1d3a481c-e450-4d5b-a5a2-54eac6642566\" (UID: \"1d3a481c-e450-4d5b-a5a2-54eac6642566\") " Dec 02 
14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.409593 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ckf8\" (UniqueName: \"kubernetes.io/projected/d24632ef-0280-4438-91ba-0a50b7495837-kube-api-access-6ckf8\") pod \"keystone-29ba-account-create-update-7lxtx\" (UID: \"d24632ef-0280-4438-91ba-0a50b7495837\") " pod="watcher-kuttl-default/keystone-29ba-account-create-update-7lxtx" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.409676 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d24632ef-0280-4438-91ba-0a50b7495837-operator-scripts\") pod \"keystone-29ba-account-create-update-7lxtx\" (UID: \"d24632ef-0280-4438-91ba-0a50b7495837\") " pod="watcher-kuttl-default/keystone-29ba-account-create-update-7lxtx" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.414243 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1d3a481c-e450-4d5b-a5a2-54eac6642566" (UID: "1d3a481c-e450-4d5b-a5a2-54eac6642566"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.414299 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1d3a481c-e450-4d5b-a5a2-54eac6642566" (UID: "1d3a481c-e450-4d5b-a5a2-54eac6642566"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.414880 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-console-config" (OuterVolumeSpecName: "console-config") pod "1d3a481c-e450-4d5b-a5a2-54eac6642566" (UID: "1d3a481c-e450-4d5b-a5a2-54eac6642566"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.414887 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d3a481c-e450-4d5b-a5a2-54eac6642566-kube-api-access-hp5lz" (OuterVolumeSpecName: "kube-api-access-hp5lz") pod "1d3a481c-e450-4d5b-a5a2-54eac6642566" (UID: "1d3a481c-e450-4d5b-a5a2-54eac6642566"). InnerVolumeSpecName "kube-api-access-hp5lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.417362 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d3a481c-e450-4d5b-a5a2-54eac6642566-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1d3a481c-e450-4d5b-a5a2-54eac6642566" (UID: "1d3a481c-e450-4d5b-a5a2-54eac6642566"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.418506 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-service-ca" (OuterVolumeSpecName: "service-ca") pod "1d3a481c-e450-4d5b-a5a2-54eac6642566" (UID: "1d3a481c-e450-4d5b-a5a2-54eac6642566"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.419597 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d3a481c-e450-4d5b-a5a2-54eac6642566-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1d3a481c-e450-4d5b-a5a2-54eac6642566" (UID: "1d3a481c-e450-4d5b-a5a2-54eac6642566"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.421601 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-db-create-xpsml"] Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.433405 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-xpsml" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.441084 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-create-xpsml"] Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.511402 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d24632ef-0280-4438-91ba-0a50b7495837-operator-scripts\") pod \"keystone-29ba-account-create-update-7lxtx\" (UID: \"d24632ef-0280-4438-91ba-0a50b7495837\") " pod="watcher-kuttl-default/keystone-29ba-account-create-update-7lxtx" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.511466 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a60c924e-e444-400c-b269-3bdc3a931300-operator-scripts\") pod \"keystone-db-create-xpsml\" (UID: \"a60c924e-e444-400c-b269-3bdc3a931300\") " pod="watcher-kuttl-default/keystone-db-create-xpsml" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.511515 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpgzh\" (UniqueName: \"kubernetes.io/projected/a60c924e-e444-400c-b269-3bdc3a931300-kube-api-access-bpgzh\") pod \"keystone-db-create-xpsml\" (UID: \"a60c924e-e444-400c-b269-3bdc3a931300\") " pod="watcher-kuttl-default/keystone-db-create-xpsml" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.511697 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ckf8\" (UniqueName: \"kubernetes.io/projected/d24632ef-0280-4438-91ba-0a50b7495837-kube-api-access-6ckf8\") pod \"keystone-29ba-account-create-update-7lxtx\" (UID: \"d24632ef-0280-4438-91ba-0a50b7495837\") " pod="watcher-kuttl-default/keystone-29ba-account-create-update-7lxtx" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.511885 4814 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.511903 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.511915 4814 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.511925 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp5lz\" (UniqueName: \"kubernetes.io/projected/1d3a481c-e450-4d5b-a5a2-54eac6642566-kube-api-access-hp5lz\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.511934 4814 reconciler_common.go:293] "Volume detached for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3a481c-e450-4d5b-a5a2-54eac6642566-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.511941 4814 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1d3a481c-e450-4d5b-a5a2-54eac6642566-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.511950 4814 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1d3a481c-e450-4d5b-a5a2-54eac6642566-console-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.512206 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d24632ef-0280-4438-91ba-0a50b7495837-operator-scripts\") pod \"keystone-29ba-account-create-update-7lxtx\" (UID: \"d24632ef-0280-4438-91ba-0a50b7495837\") " pod="watcher-kuttl-default/keystone-29ba-account-create-update-7lxtx" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.531442 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ckf8\" (UniqueName: \"kubernetes.io/projected/d24632ef-0280-4438-91ba-0a50b7495837-kube-api-access-6ckf8\") pod \"keystone-29ba-account-create-update-7lxtx\" (UID: \"d24632ef-0280-4438-91ba-0a50b7495837\") " pod="watcher-kuttl-default/keystone-29ba-account-create-update-7lxtx" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.613155 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a60c924e-e444-400c-b269-3bdc3a931300-operator-scripts\") pod \"keystone-db-create-xpsml\" (UID: \"a60c924e-e444-400c-b269-3bdc3a931300\") " pod="watcher-kuttl-default/keystone-db-create-xpsml" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 
14:47:12.613212 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpgzh\" (UniqueName: \"kubernetes.io/projected/a60c924e-e444-400c-b269-3bdc3a931300-kube-api-access-bpgzh\") pod \"keystone-db-create-xpsml\" (UID: \"a60c924e-e444-400c-b269-3bdc3a931300\") " pod="watcher-kuttl-default/keystone-db-create-xpsml" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.614340 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a60c924e-e444-400c-b269-3bdc3a931300-operator-scripts\") pod \"keystone-db-create-xpsml\" (UID: \"a60c924e-e444-400c-b269-3bdc3a931300\") " pod="watcher-kuttl-default/keystone-db-create-xpsml" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.631728 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpgzh\" (UniqueName: \"kubernetes.io/projected/a60c924e-e444-400c-b269-3bdc3a931300-kube-api-access-bpgzh\") pod \"keystone-db-create-xpsml\" (UID: \"a60c924e-e444-400c-b269-3bdc3a931300\") " pod="watcher-kuttl-default/keystone-db-create-xpsml" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.648586 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-29ba-account-create-update-7lxtx" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.759483 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-xpsml" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.849567 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"599f7de4-aead-4401-a25c-59c97b390339","Type":"ContainerStarted","Data":"5f8053cf11622ebc6f3b75bb7d1d35f81ef78ac5845fa3c75014db91d3d93631"} Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.855935 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7f988dc77-nvn6q_1d3a481c-e450-4d5b-a5a2-54eac6642566/console/0.log" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.855975 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f988dc77-nvn6q" event={"ID":"1d3a481c-e450-4d5b-a5a2-54eac6642566","Type":"ContainerDied","Data":"4899c59757c53087c153920ab661c614d78f49c45e65824482e9e07dc3fc8b90"} Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.856056 4814 scope.go:117] "RemoveContainer" containerID="151c14fe5549b053799dadc161fbd4771d2a94df0d80d0cad069556cf76636ff" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.856094 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7f988dc77-nvn6q" Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.947629 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7f988dc77-nvn6q"] Dec 02 14:47:12 crc kubenswrapper[4814]: I1202 14:47:12.955728 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7f988dc77-nvn6q"] Dec 02 14:47:13 crc kubenswrapper[4814]: I1202 14:47:13.142598 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-29ba-account-create-update-7lxtx"] Dec 02 14:47:13 crc kubenswrapper[4814]: I1202 14:47:13.203248 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-create-xpsml"] Dec 02 14:47:13 crc kubenswrapper[4814]: I1202 14:47:13.865634 4814 generic.go:334] "Generic (PLEG): container finished" podID="d24632ef-0280-4438-91ba-0a50b7495837" containerID="9912f23dabf512cbdf27b5ca6bb44dfe6347d9cfa418606e46674ece9bd65489" exitCode=0 Dec 02 14:47:13 crc kubenswrapper[4814]: I1202 14:47:13.865920 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-29ba-account-create-update-7lxtx" event={"ID":"d24632ef-0280-4438-91ba-0a50b7495837","Type":"ContainerDied","Data":"9912f23dabf512cbdf27b5ca6bb44dfe6347d9cfa418606e46674ece9bd65489"} Dec 02 14:47:13 crc kubenswrapper[4814]: I1202 14:47:13.866042 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-29ba-account-create-update-7lxtx" event={"ID":"d24632ef-0280-4438-91ba-0a50b7495837","Type":"ContainerStarted","Data":"a264f7e0b9ad29b8031646756e5443e5baa45b1efd0e62c02242615356c3c7cf"} Dec 02 14:47:13 crc kubenswrapper[4814]: I1202 14:47:13.869493 4814 generic.go:334] "Generic (PLEG): container finished" podID="a60c924e-e444-400c-b269-3bdc3a931300" containerID="a0efff47d8a70af96abcd4cf592b2366c05f1e66bb3bfc9cb751652dcdf6c084" exitCode=0 Dec 02 14:47:13 crc kubenswrapper[4814]: 
I1202 14:47:13.869522 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-xpsml" event={"ID":"a60c924e-e444-400c-b269-3bdc3a931300","Type":"ContainerDied","Data":"a0efff47d8a70af96abcd4cf592b2366c05f1e66bb3bfc9cb751652dcdf6c084"} Dec 02 14:47:13 crc kubenswrapper[4814]: I1202 14:47:13.869564 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-xpsml" event={"ID":"a60c924e-e444-400c-b269-3bdc3a931300","Type":"ContainerStarted","Data":"b08360e6e92467c99eef52cba1006faf3ad9b1e8a2e831a399706e572c7716b7"} Dec 02 14:47:14 crc kubenswrapper[4814]: I1202 14:47:14.153724 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d3a481c-e450-4d5b-a5a2-54eac6642566" path="/var/lib/kubelet/pods/1d3a481c-e450-4d5b-a5a2-54eac6642566/volumes" Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.260994 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-xpsml" Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.271883 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-29ba-account-create-update-7lxtx" Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.352995 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpgzh\" (UniqueName: \"kubernetes.io/projected/a60c924e-e444-400c-b269-3bdc3a931300-kube-api-access-bpgzh\") pod \"a60c924e-e444-400c-b269-3bdc3a931300\" (UID: \"a60c924e-e444-400c-b269-3bdc3a931300\") " Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.353065 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d24632ef-0280-4438-91ba-0a50b7495837-operator-scripts\") pod \"d24632ef-0280-4438-91ba-0a50b7495837\" (UID: \"d24632ef-0280-4438-91ba-0a50b7495837\") " Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.353150 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ckf8\" (UniqueName: \"kubernetes.io/projected/d24632ef-0280-4438-91ba-0a50b7495837-kube-api-access-6ckf8\") pod \"d24632ef-0280-4438-91ba-0a50b7495837\" (UID: \"d24632ef-0280-4438-91ba-0a50b7495837\") " Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.353353 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a60c924e-e444-400c-b269-3bdc3a931300-operator-scripts\") pod \"a60c924e-e444-400c-b269-3bdc3a931300\" (UID: \"a60c924e-e444-400c-b269-3bdc3a931300\") " Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.353810 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d24632ef-0280-4438-91ba-0a50b7495837-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d24632ef-0280-4438-91ba-0a50b7495837" (UID: "d24632ef-0280-4438-91ba-0a50b7495837"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.354151 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a60c924e-e444-400c-b269-3bdc3a931300-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a60c924e-e444-400c-b269-3bdc3a931300" (UID: "a60c924e-e444-400c-b269-3bdc3a931300"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.359181 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a60c924e-e444-400c-b269-3bdc3a931300-kube-api-access-bpgzh" (OuterVolumeSpecName: "kube-api-access-bpgzh") pod "a60c924e-e444-400c-b269-3bdc3a931300" (UID: "a60c924e-e444-400c-b269-3bdc3a931300"). InnerVolumeSpecName "kube-api-access-bpgzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.367495 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d24632ef-0280-4438-91ba-0a50b7495837-kube-api-access-6ckf8" (OuterVolumeSpecName: "kube-api-access-6ckf8") pod "d24632ef-0280-4438-91ba-0a50b7495837" (UID: "d24632ef-0280-4438-91ba-0a50b7495837"). InnerVolumeSpecName "kube-api-access-6ckf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.454755 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ckf8\" (UniqueName: \"kubernetes.io/projected/d24632ef-0280-4438-91ba-0a50b7495837-kube-api-access-6ckf8\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.454788 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a60c924e-e444-400c-b269-3bdc3a931300-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.454797 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpgzh\" (UniqueName: \"kubernetes.io/projected/a60c924e-e444-400c-b269-3bdc3a931300-kube-api-access-bpgzh\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.454806 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d24632ef-0280-4438-91ba-0a50b7495837-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.888288 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-29ba-account-create-update-7lxtx" event={"ID":"d24632ef-0280-4438-91ba-0a50b7495837","Type":"ContainerDied","Data":"a264f7e0b9ad29b8031646756e5443e5baa45b1efd0e62c02242615356c3c7cf"} Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.888308 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-29ba-account-create-update-7lxtx" Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.888327 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a264f7e0b9ad29b8031646756e5443e5baa45b1efd0e62c02242615356c3c7cf" Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.889940 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-xpsml" event={"ID":"a60c924e-e444-400c-b269-3bdc3a931300","Type":"ContainerDied","Data":"b08360e6e92467c99eef52cba1006faf3ad9b1e8a2e831a399706e572c7716b7"} Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.890011 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-xpsml" Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.889962 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b08360e6e92467c99eef52cba1006faf3ad9b1e8a2e831a399706e572c7716b7" Dec 02 14:47:22 crc kubenswrapper[4814]: I1202 14:47:15.893086 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"599f7de4-aead-4401-a25c-59c97b390339","Type":"ContainerStarted","Data":"b2679b6a8e5f0be26220cd2754b438c28314e9cf01b96b3b5c0d2c782c14ce5e"} Dec 02 14:47:23 crc kubenswrapper[4814]: I1202 14:47:23.963844 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"599f7de4-aead-4401-a25c-59c97b390339","Type":"ContainerStarted","Data":"31ee39cd662d05ad5d187ce72478f65d41b615615f2d1d401acf22adfcbdf0b0"} Dec 02 14:47:24 crc kubenswrapper[4814]: I1202 14:47:24.996686 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:25 crc kubenswrapper[4814]: I1202 14:47:25.985069 4814 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"46b61b10-de07-41cf-9fc0-03b5fc067b56","Type":"ContainerDied","Data":"02ae3b2db13001840796c24fc81555b41ac7da3cef420dac906b08da9f5b2fd9"} Dec 02 14:47:25 crc kubenswrapper[4814]: I1202 14:47:25.984965 4814 generic.go:334] "Generic (PLEG): container finished" podID="46b61b10-de07-41cf-9fc0-03b5fc067b56" containerID="02ae3b2db13001840796c24fc81555b41ac7da3cef420dac906b08da9f5b2fd9" exitCode=0 Dec 02 14:47:25 crc kubenswrapper[4814]: I1202 14:47:25.992333 4814 generic.go:334] "Generic (PLEG): container finished" podID="7504d82f-f8f4-4784-ae11-85688085e352" containerID="2686039f5634379aafc9351a508e5ae79563f801244d4ea6c033e6901453b4ab" exitCode=0 Dec 02 14:47:25 crc kubenswrapper[4814]: I1202 14:47:25.992409 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"7504d82f-f8f4-4784-ae11-85688085e352","Type":"ContainerDied","Data":"2686039f5634379aafc9351a508e5ae79563f801244d4ea6c033e6901453b4ab"} Dec 02 14:47:26 crc kubenswrapper[4814]: I1202 14:47:26.032631 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/prometheus-metric-storage-0" podStartSLOduration=6.493569875 podStartE2EDuration="54.032607995s" podCreationTimestamp="2025-12-02 14:46:32 +0000 UTC" firstStartedPulling="2025-12-02 14:46:35.751840609 +0000 UTC m=+1057.876202813" lastFinishedPulling="2025-12-02 14:47:23.290878729 +0000 UTC m=+1105.415240933" observedRunningTime="2025-12-02 14:47:24.00555184 +0000 UTC m=+1106.129914114" watchObservedRunningTime="2025-12-02 14:47:26.032607995 +0000 UTC m=+1108.156970239" Dec 02 14:47:27 crc kubenswrapper[4814]: I1202 14:47:27.003564 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" 
event={"ID":"46b61b10-de07-41cf-9fc0-03b5fc067b56","Type":"ContainerStarted","Data":"18dbcfd025b45090a0168c9425969780c3249d7db10b8deb1e253d027807548e"} Dec 02 14:47:27 crc kubenswrapper[4814]: I1202 14:47:27.004194 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:47:27 crc kubenswrapper[4814]: I1202 14:47:27.005996 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"7504d82f-f8f4-4784-ae11-85688085e352","Type":"ContainerStarted","Data":"5326df8ac8a95c935e11b32fc8f4d511b0362eef7f639bc462c583f55c504855"} Dec 02 14:47:27 crc kubenswrapper[4814]: I1202 14:47:27.006190 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:47:27 crc kubenswrapper[4814]: I1202 14:47:27.039201 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" podStartSLOduration=38.693657555 podStartE2EDuration="59.0391855s" podCreationTimestamp="2025-12-02 14:46:28 +0000 UTC" firstStartedPulling="2025-12-02 14:46:30.661945364 +0000 UTC m=+1052.786307568" lastFinishedPulling="2025-12-02 14:46:51.007473299 +0000 UTC m=+1073.131835513" observedRunningTime="2025-12-02 14:47:27.032615776 +0000 UTC m=+1109.156977980" watchObservedRunningTime="2025-12-02 14:47:27.0391855 +0000 UTC m=+1109.163547704" Dec 02 14:47:27 crc kubenswrapper[4814]: I1202 14:47:27.068320 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/rabbitmq-server-0" podStartSLOduration=38.488928957 podStartE2EDuration="58.068302276s" podCreationTimestamp="2025-12-02 14:46:29 +0000 UTC" firstStartedPulling="2025-12-02 14:46:31.426794157 +0000 UTC m=+1053.551156361" lastFinishedPulling="2025-12-02 14:46:51.006167446 +0000 UTC m=+1073.130529680" observedRunningTime="2025-12-02 14:47:27.060128102 +0000 
UTC m=+1109.184490316" watchObservedRunningTime="2025-12-02 14:47:27.068302276 +0000 UTC m=+1109.192664480" Dec 02 14:47:34 crc kubenswrapper[4814]: I1202 14:47:34.996633 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:35 crc kubenswrapper[4814]: I1202 14:47:35.004218 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:35 crc kubenswrapper[4814]: I1202 14:47:35.105312 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:37 crc kubenswrapper[4814]: I1202 14:47:37.561811 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Dec 02 14:47:37 crc kubenswrapper[4814]: I1202 14:47:37.562522 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="599f7de4-aead-4401-a25c-59c97b390339" containerName="prometheus" containerID="cri-o://5f8053cf11622ebc6f3b75bb7d1d35f81ef78ac5845fa3c75014db91d3d93631" gracePeriod=600 Dec 02 14:47:37 crc kubenswrapper[4814]: I1202 14:47:37.562689 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="599f7de4-aead-4401-a25c-59c97b390339" containerName="config-reloader" containerID="cri-o://b2679b6a8e5f0be26220cd2754b438c28314e9cf01b96b3b5c0d2c782c14ce5e" gracePeriod=600 Dec 02 14:47:37 crc kubenswrapper[4814]: I1202 14:47:37.562810 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="599f7de4-aead-4401-a25c-59c97b390339" containerName="thanos-sidecar" containerID="cri-o://31ee39cd662d05ad5d187ce72478f65d41b615615f2d1d401acf22adfcbdf0b0" gracePeriod=600 Dec 02 14:47:38 
crc kubenswrapper[4814]: I1202 14:47:38.125358 4814 generic.go:334] "Generic (PLEG): container finished" podID="599f7de4-aead-4401-a25c-59c97b390339" containerID="31ee39cd662d05ad5d187ce72478f65d41b615615f2d1d401acf22adfcbdf0b0" exitCode=0 Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.125388 4814 generic.go:334] "Generic (PLEG): container finished" podID="599f7de4-aead-4401-a25c-59c97b390339" containerID="b2679b6a8e5f0be26220cd2754b438c28314e9cf01b96b3b5c0d2c782c14ce5e" exitCode=0 Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.125397 4814 generic.go:334] "Generic (PLEG): container finished" podID="599f7de4-aead-4401-a25c-59c97b390339" containerID="5f8053cf11622ebc6f3b75bb7d1d35f81ef78ac5845fa3c75014db91d3d93631" exitCode=0 Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.125415 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"599f7de4-aead-4401-a25c-59c97b390339","Type":"ContainerDied","Data":"31ee39cd662d05ad5d187ce72478f65d41b615615f2d1d401acf22adfcbdf0b0"} Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.125438 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"599f7de4-aead-4401-a25c-59c97b390339","Type":"ContainerDied","Data":"b2679b6a8e5f0be26220cd2754b438c28314e9cf01b96b3b5c0d2c782c14ce5e"} Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.125447 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"599f7de4-aead-4401-a25c-59c97b390339","Type":"ContainerDied","Data":"5f8053cf11622ebc6f3b75bb7d1d35f81ef78ac5845fa3c75014db91d3d93631"} Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.515484 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.644606 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/599f7de4-aead-4401-a25c-59c97b390339-web-config\") pod \"599f7de4-aead-4401-a25c-59c97b390339\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.644675 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/599f7de4-aead-4401-a25c-59c97b390339-config-out\") pod \"599f7de4-aead-4401-a25c-59c97b390339\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.645039 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\") pod \"599f7de4-aead-4401-a25c-59c97b390339\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.645191 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzvnf\" (UniqueName: \"kubernetes.io/projected/599f7de4-aead-4401-a25c-59c97b390339-kube-api-access-xzvnf\") pod \"599f7de4-aead-4401-a25c-59c97b390339\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.645256 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/599f7de4-aead-4401-a25c-59c97b390339-config\") pod \"599f7de4-aead-4401-a25c-59c97b390339\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.645302 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/599f7de4-aead-4401-a25c-59c97b390339-tls-assets\") pod \"599f7de4-aead-4401-a25c-59c97b390339\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.645340 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/599f7de4-aead-4401-a25c-59c97b390339-thanos-prometheus-http-client-file\") pod \"599f7de4-aead-4401-a25c-59c97b390339\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.645375 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/599f7de4-aead-4401-a25c-59c97b390339-prometheus-metric-storage-rulefiles-0\") pod \"599f7de4-aead-4401-a25c-59c97b390339\" (UID: \"599f7de4-aead-4401-a25c-59c97b390339\") " Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.646149 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/599f7de4-aead-4401-a25c-59c97b390339-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "599f7de4-aead-4401-a25c-59c97b390339" (UID: "599f7de4-aead-4401-a25c-59c97b390339"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.654417 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/599f7de4-aead-4401-a25c-59c97b390339-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "599f7de4-aead-4401-a25c-59c97b390339" (UID: "599f7de4-aead-4401-a25c-59c97b390339"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.654457 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/599f7de4-aead-4401-a25c-59c97b390339-config-out" (OuterVolumeSpecName: "config-out") pod "599f7de4-aead-4401-a25c-59c97b390339" (UID: "599f7de4-aead-4401-a25c-59c97b390339"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.654449 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/599f7de4-aead-4401-a25c-59c97b390339-config" (OuterVolumeSpecName: "config") pod "599f7de4-aead-4401-a25c-59c97b390339" (UID: "599f7de4-aead-4401-a25c-59c97b390339"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.654591 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/599f7de4-aead-4401-a25c-59c97b390339-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "599f7de4-aead-4401-a25c-59c97b390339" (UID: "599f7de4-aead-4401-a25c-59c97b390339"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.654658 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/599f7de4-aead-4401-a25c-59c97b390339-kube-api-access-xzvnf" (OuterVolumeSpecName: "kube-api-access-xzvnf") pod "599f7de4-aead-4401-a25c-59c97b390339" (UID: "599f7de4-aead-4401-a25c-59c97b390339"). InnerVolumeSpecName "kube-api-access-xzvnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.668132 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "599f7de4-aead-4401-a25c-59c97b390339" (UID: "599f7de4-aead-4401-a25c-59c97b390339"). InnerVolumeSpecName "pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.675417 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/599f7de4-aead-4401-a25c-59c97b390339-web-config" (OuterVolumeSpecName: "web-config") pod "599f7de4-aead-4401-a25c-59c97b390339" (UID: "599f7de4-aead-4401-a25c-59c97b390339"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.747120 4814 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/599f7de4-aead-4401-a25c-59c97b390339-web-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.747411 4814 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/599f7de4-aead-4401-a25c-59c97b390339-config-out\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.747526 4814 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\") on node \"crc\" " Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.747623 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzvnf\" (UniqueName: 
\"kubernetes.io/projected/599f7de4-aead-4401-a25c-59c97b390339-kube-api-access-xzvnf\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.747711 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/599f7de4-aead-4401-a25c-59c97b390339-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.747783 4814 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/599f7de4-aead-4401-a25c-59c97b390339-tls-assets\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.747858 4814 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/599f7de4-aead-4401-a25c-59c97b390339-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.747934 4814 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/599f7de4-aead-4401-a25c-59c97b390339-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.772440 4814 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.772874 4814 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258") on node "crc" Dec 02 14:47:38 crc kubenswrapper[4814]: I1202 14:47:38.849264 4814 reconciler_common.go:293] "Volume detached for volume \"pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.136002 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"599f7de4-aead-4401-a25c-59c97b390339","Type":"ContainerDied","Data":"e5fb1d1cfc4aa31e0967797157df86a11324905c515e6802e073d4297a54cb5e"} Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.136311 4814 scope.go:117] "RemoveContainer" containerID="31ee39cd662d05ad5d187ce72478f65d41b615615f2d1d401acf22adfcbdf0b0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.136324 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.161680 4814 scope.go:117] "RemoveContainer" containerID="b2679b6a8e5f0be26220cd2754b438c28314e9cf01b96b3b5c0d2c782c14ce5e" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.173607 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.182668 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.207309 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Dec 02 14:47:39 crc kubenswrapper[4814]: E1202 14:47:39.207751 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="599f7de4-aead-4401-a25c-59c97b390339" containerName="init-config-reloader" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.207767 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="599f7de4-aead-4401-a25c-59c97b390339" containerName="init-config-reloader" Dec 02 14:47:39 crc kubenswrapper[4814]: E1202 14:47:39.207781 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a60c924e-e444-400c-b269-3bdc3a931300" containerName="mariadb-database-create" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.207790 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a60c924e-e444-400c-b269-3bdc3a931300" containerName="mariadb-database-create" Dec 02 14:47:39 crc kubenswrapper[4814]: E1202 14:47:39.207803 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d24632ef-0280-4438-91ba-0a50b7495837" containerName="mariadb-account-create-update" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.207814 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d24632ef-0280-4438-91ba-0a50b7495837" 
containerName="mariadb-account-create-update" Dec 02 14:47:39 crc kubenswrapper[4814]: E1202 14:47:39.207839 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="599f7de4-aead-4401-a25c-59c97b390339" containerName="prometheus" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.207847 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="599f7de4-aead-4401-a25c-59c97b390339" containerName="prometheus" Dec 02 14:47:39 crc kubenswrapper[4814]: E1202 14:47:39.207868 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="599f7de4-aead-4401-a25c-59c97b390339" containerName="thanos-sidecar" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.207877 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="599f7de4-aead-4401-a25c-59c97b390339" containerName="thanos-sidecar" Dec 02 14:47:39 crc kubenswrapper[4814]: E1202 14:47:39.207889 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="599f7de4-aead-4401-a25c-59c97b390339" containerName="config-reloader" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.207897 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="599f7de4-aead-4401-a25c-59c97b390339" containerName="config-reloader" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.208119 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a60c924e-e444-400c-b269-3bdc3a931300" containerName="mariadb-database-create" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.208139 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="599f7de4-aead-4401-a25c-59c97b390339" containerName="config-reloader" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.208157 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="599f7de4-aead-4401-a25c-59c97b390339" containerName="prometheus" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.208174 4814 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="599f7de4-aead-4401-a25c-59c97b390339" containerName="thanos-sidecar" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.208183 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d24632ef-0280-4438-91ba-0a50b7495837" containerName="mariadb-account-create-update" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.210149 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.214189 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-web-config" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.214811 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.223998 4814 scope.go:117] "RemoveContainer" containerID="5f8053cf11622ebc6f3b75bb7d1d35f81ef78ac5845fa3c75014db91d3d93631" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.225025 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.226317 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-prometheus-dockercfg-88898" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.226789 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.226376 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-metric-storage-prometheus-svc" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.232788 4814 reflector.go:368] Caches populated for *v1.Secret from 
object-"watcher-kuttl-default"/"prometheus-metric-storage-tls-assets-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.250548 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.271438 4814 scope.go:117] "RemoveContainer" containerID="11d0555eacf719819342f91488a7aa1e098a9790ac9a67d4738a73e3f29dda6f" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.357925 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f6a3a77e-84b6-4f45-9666-9b295adabe90-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.357968 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a3a77e-84b6-4f45-9666-9b295adabe90-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.357997 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f6a3a77e-84b6-4f45-9666-9b295adabe90-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.358015 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" 
(UniqueName: \"kubernetes.io/configmap/f6a3a77e-84b6-4f45-9666-9b295adabe90-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.358100 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f6a3a77e-84b6-4f45-9666-9b295adabe90-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.358186 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f6a3a77e-84b6-4f45-9666-9b295adabe90-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.358309 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f6a3a77e-84b6-4f45-9666-9b295adabe90-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.358350 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5h7f\" (UniqueName: \"kubernetes.io/projected/f6a3a77e-84b6-4f45-9666-9b295adabe90-kube-api-access-m5h7f\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " 
pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.358507 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.358544 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f6a3a77e-84b6-4f45-9666-9b295adabe90-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.358561 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f6a3a77e-84b6-4f45-9666-9b295adabe90-config\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.460492 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5h7f\" (UniqueName: \"kubernetes.io/projected/f6a3a77e-84b6-4f45-9666-9b295adabe90-kube-api-access-m5h7f\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.460616 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.460654 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f6a3a77e-84b6-4f45-9666-9b295adabe90-config\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.460678 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f6a3a77e-84b6-4f45-9666-9b295adabe90-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.460714 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f6a3a77e-84b6-4f45-9666-9b295adabe90-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.460741 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a3a77e-84b6-4f45-9666-9b295adabe90-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc 
kubenswrapper[4814]: I1202 14:47:39.461656 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f6a3a77e-84b6-4f45-9666-9b295adabe90-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.461747 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f6a3a77e-84b6-4f45-9666-9b295adabe90-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.461831 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f6a3a77e-84b6-4f45-9666-9b295adabe90-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.461933 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f6a3a77e-84b6-4f45-9666-9b295adabe90-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.462116 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f6a3a77e-84b6-4f45-9666-9b295adabe90-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.463762 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f6a3a77e-84b6-4f45-9666-9b295adabe90-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.466860 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f6a3a77e-84b6-4f45-9666-9b295adabe90-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.466968 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f6a3a77e-84b6-4f45-9666-9b295adabe90-config\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.468082 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f6a3a77e-84b6-4f45-9666-9b295adabe90-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.468096 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a3a77e-84b6-4f45-9666-9b295adabe90-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: 
\"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.470208 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f6a3a77e-84b6-4f45-9666-9b295adabe90-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.471065 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f6a3a77e-84b6-4f45-9666-9b295adabe90-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.471355 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f6a3a77e-84b6-4f45-9666-9b295adabe90-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.471446 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f6a3a77e-84b6-4f45-9666-9b295adabe90-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.471541 4814 csi_attacher.go:380] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.471588 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c53988640793b331e909f437f491a3f424308ec4a276ce6541ad4095c8223a9e/globalmount\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.504272 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5h7f\" (UniqueName: \"kubernetes.io/projected/f6a3a77e-84b6-4f45-9666-9b295adabe90-kube-api-access-m5h7f\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.531945 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c4b4856-6ca9-49f1-8a57-957922dbb258\") pod \"prometheus-metric-storage-0\" (UID: \"f6a3a77e-84b6-4f45-9666-9b295adabe90\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:39 crc kubenswrapper[4814]: I1202 14:47:39.542037 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:40 crc kubenswrapper[4814]: I1202 14:47:40.136279 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Dec 02 14:47:40 crc kubenswrapper[4814]: I1202 14:47:40.178014 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="599f7de4-aead-4401-a25c-59c97b390339" path="/var/lib/kubelet/pods/599f7de4-aead-4401-a25c-59c97b390339/volumes" Dec 02 14:47:40 crc kubenswrapper[4814]: I1202 14:47:40.179069 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 14:47:40 crc kubenswrapper[4814]: I1202 14:47:40.862293 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 14:47:41 crc kubenswrapper[4814]: I1202 14:47:41.160894 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"f6a3a77e-84b6-4f45-9666-9b295adabe90","Type":"ContainerStarted","Data":"01f05716e9fb1484f48554da3fc7d40de79fedd977fe9dfdb2ee42134df83044"} Dec 02 14:47:42 crc kubenswrapper[4814]: I1202 14:47:42.445521 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-db-sync-76pvq"] Dec 02 14:47:42 crc kubenswrapper[4814]: I1202 14:47:42.447279 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-76pvq" Dec 02 14:47:42 crc kubenswrapper[4814]: I1202 14:47:42.450161 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Dec 02 14:47:42 crc kubenswrapper[4814]: I1202 14:47:42.450161 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Dec 02 14:47:42 crc kubenswrapper[4814]: I1202 14:47:42.451246 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Dec 02 14:47:42 crc kubenswrapper[4814]: I1202 14:47:42.457823 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-76pvq"] Dec 02 14:47:42 crc kubenswrapper[4814]: I1202 14:47:42.460355 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-d2qcr" Dec 02 14:47:42 crc kubenswrapper[4814]: I1202 14:47:42.507326 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c600570c-ddbe-48ab-84d2-7acb9d546f01-combined-ca-bundle\") pod \"keystone-db-sync-76pvq\" (UID: \"c600570c-ddbe-48ab-84d2-7acb9d546f01\") " pod="watcher-kuttl-default/keystone-db-sync-76pvq" Dec 02 14:47:42 crc kubenswrapper[4814]: I1202 14:47:42.507381 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gppzb\" (UniqueName: \"kubernetes.io/projected/c600570c-ddbe-48ab-84d2-7acb9d546f01-kube-api-access-gppzb\") pod \"keystone-db-sync-76pvq\" (UID: \"c600570c-ddbe-48ab-84d2-7acb9d546f01\") " pod="watcher-kuttl-default/keystone-db-sync-76pvq" Dec 02 14:47:42 crc kubenswrapper[4814]: I1202 14:47:42.507412 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c600570c-ddbe-48ab-84d2-7acb9d546f01-config-data\") pod \"keystone-db-sync-76pvq\" (UID: \"c600570c-ddbe-48ab-84d2-7acb9d546f01\") " pod="watcher-kuttl-default/keystone-db-sync-76pvq" Dec 02 14:47:42 crc kubenswrapper[4814]: I1202 14:47:42.608719 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c600570c-ddbe-48ab-84d2-7acb9d546f01-combined-ca-bundle\") pod \"keystone-db-sync-76pvq\" (UID: \"c600570c-ddbe-48ab-84d2-7acb9d546f01\") " pod="watcher-kuttl-default/keystone-db-sync-76pvq" Dec 02 14:47:42 crc kubenswrapper[4814]: I1202 14:47:42.608788 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gppzb\" (UniqueName: \"kubernetes.io/projected/c600570c-ddbe-48ab-84d2-7acb9d546f01-kube-api-access-gppzb\") pod \"keystone-db-sync-76pvq\" (UID: \"c600570c-ddbe-48ab-84d2-7acb9d546f01\") " pod="watcher-kuttl-default/keystone-db-sync-76pvq" Dec 02 14:47:42 crc kubenswrapper[4814]: I1202 14:47:42.608821 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c600570c-ddbe-48ab-84d2-7acb9d546f01-config-data\") pod \"keystone-db-sync-76pvq\" (UID: \"c600570c-ddbe-48ab-84d2-7acb9d546f01\") " pod="watcher-kuttl-default/keystone-db-sync-76pvq" Dec 02 14:47:42 crc kubenswrapper[4814]: I1202 14:47:42.614629 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c600570c-ddbe-48ab-84d2-7acb9d546f01-config-data\") pod \"keystone-db-sync-76pvq\" (UID: \"c600570c-ddbe-48ab-84d2-7acb9d546f01\") " pod="watcher-kuttl-default/keystone-db-sync-76pvq" Dec 02 14:47:42 crc kubenswrapper[4814]: I1202 14:47:42.615175 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c600570c-ddbe-48ab-84d2-7acb9d546f01-combined-ca-bundle\") pod \"keystone-db-sync-76pvq\" (UID: \"c600570c-ddbe-48ab-84d2-7acb9d546f01\") " pod="watcher-kuttl-default/keystone-db-sync-76pvq" Dec 02 14:47:42 crc kubenswrapper[4814]: I1202 14:47:42.647638 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gppzb\" (UniqueName: \"kubernetes.io/projected/c600570c-ddbe-48ab-84d2-7acb9d546f01-kube-api-access-gppzb\") pod \"keystone-db-sync-76pvq\" (UID: \"c600570c-ddbe-48ab-84d2-7acb9d546f01\") " pod="watcher-kuttl-default/keystone-db-sync-76pvq" Dec 02 14:47:42 crc kubenswrapper[4814]: I1202 14:47:42.765591 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-76pvq" Dec 02 14:47:43 crc kubenswrapper[4814]: I1202 14:47:43.178087 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"f6a3a77e-84b6-4f45-9666-9b295adabe90","Type":"ContainerStarted","Data":"4ea1069a4b8a5b8fbc2463a1fce51b395dd39fd8a9620fdd3379d8c3ead20ca3"} Dec 02 14:47:43 crc kubenswrapper[4814]: I1202 14:47:43.434736 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-76pvq"] Dec 02 14:47:44 crc kubenswrapper[4814]: I1202 14:47:44.185706 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-76pvq" event={"ID":"c600570c-ddbe-48ab-84d2-7acb9d546f01","Type":"ContainerStarted","Data":"c55acd2e3028b02a0fbbd8ea0d3e381b9bf522a984b8b4355f0f14a83b7f0180"} Dec 02 14:47:49 crc kubenswrapper[4814]: I1202 14:47:49.231564 4814 generic.go:334] "Generic (PLEG): container finished" podID="f6a3a77e-84b6-4f45-9666-9b295adabe90" containerID="4ea1069a4b8a5b8fbc2463a1fce51b395dd39fd8a9620fdd3379d8c3ead20ca3" exitCode=0 Dec 02 14:47:49 crc kubenswrapper[4814]: I1202 14:47:49.231652 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"f6a3a77e-84b6-4f45-9666-9b295adabe90","Type":"ContainerDied","Data":"4ea1069a4b8a5b8fbc2463a1fce51b395dd39fd8a9620fdd3379d8c3ead20ca3"} Dec 02 14:47:50 crc kubenswrapper[4814]: I1202 14:47:50.651061 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:47:50 crc kubenswrapper[4814]: I1202 14:47:50.651360 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:47:52 crc kubenswrapper[4814]: I1202 14:47:52.256013 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-76pvq" event={"ID":"c600570c-ddbe-48ab-84d2-7acb9d546f01","Type":"ContainerStarted","Data":"4e896c31d4830d795391b60001c67a4254747c2cd5c23b757d86e95e5fabb70e"} Dec 02 14:47:52 crc kubenswrapper[4814]: I1202 14:47:52.258921 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"f6a3a77e-84b6-4f45-9666-9b295adabe90","Type":"ContainerStarted","Data":"1cf453e88934dc879e2913fbd2bf4922b81d0e1c186eb69d6e72ac0d90fb4814"} Dec 02 14:47:52 crc kubenswrapper[4814]: I1202 14:47:52.280445 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-db-sync-76pvq" podStartSLOduration=2.558951322 podStartE2EDuration="10.280425944s" podCreationTimestamp="2025-12-02 14:47:42 +0000 UTC" firstStartedPulling="2025-12-02 14:47:43.442897195 +0000 UTC m=+1125.567259399" 
lastFinishedPulling="2025-12-02 14:47:51.164371817 +0000 UTC m=+1133.288734021" observedRunningTime="2025-12-02 14:47:52.274799493 +0000 UTC m=+1134.399161697" watchObservedRunningTime="2025-12-02 14:47:52.280425944 +0000 UTC m=+1134.404788148" Dec 02 14:47:54 crc kubenswrapper[4814]: I1202 14:47:54.279199 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"f6a3a77e-84b6-4f45-9666-9b295adabe90","Type":"ContainerStarted","Data":"00e47f8575a90ebd04ac15ddd2fdb3d803da9302431d4ed18a8a9dc47d6786bc"} Dec 02 14:47:54 crc kubenswrapper[4814]: I1202 14:47:54.279476 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"f6a3a77e-84b6-4f45-9666-9b295adabe90","Type":"ContainerStarted","Data":"80c2c78a632dee2b4bd309af4023b28548197a98bb2fa59eaa4905a56d2a4ec7"} Dec 02 14:47:54 crc kubenswrapper[4814]: I1202 14:47:54.309925 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/prometheus-metric-storage-0" podStartSLOduration=15.30989882 podStartE2EDuration="15.30989882s" podCreationTimestamp="2025-12-02 14:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:47:54.303119331 +0000 UTC m=+1136.427481555" watchObservedRunningTime="2025-12-02 14:47:54.30989882 +0000 UTC m=+1136.434261034" Dec 02 14:47:54 crc kubenswrapper[4814]: I1202 14:47:54.560502 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:54 crc kubenswrapper[4814]: I1202 14:47:54.560591 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:54 crc kubenswrapper[4814]: I1202 14:47:54.568142 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:55 crc kubenswrapper[4814]: I1202 14:47:55.290232 4814 generic.go:334] "Generic (PLEG): container finished" podID="c600570c-ddbe-48ab-84d2-7acb9d546f01" containerID="4e896c31d4830d795391b60001c67a4254747c2cd5c23b757d86e95e5fabb70e" exitCode=0 Dec 02 14:47:55 crc kubenswrapper[4814]: I1202 14:47:55.290303 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-76pvq" event={"ID":"c600570c-ddbe-48ab-84d2-7acb9d546f01","Type":"ContainerDied","Data":"4e896c31d4830d795391b60001c67a4254747c2cd5c23b757d86e95e5fabb70e"} Dec 02 14:47:55 crc kubenswrapper[4814]: I1202 14:47:55.295345 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 14:47:56 crc kubenswrapper[4814]: I1202 14:47:56.598371 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-76pvq" Dec 02 14:47:56 crc kubenswrapper[4814]: I1202 14:47:56.745626 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gppzb\" (UniqueName: \"kubernetes.io/projected/c600570c-ddbe-48ab-84d2-7acb9d546f01-kube-api-access-gppzb\") pod \"c600570c-ddbe-48ab-84d2-7acb9d546f01\" (UID: \"c600570c-ddbe-48ab-84d2-7acb9d546f01\") " Dec 02 14:47:56 crc kubenswrapper[4814]: I1202 14:47:56.745820 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c600570c-ddbe-48ab-84d2-7acb9d546f01-combined-ca-bundle\") pod \"c600570c-ddbe-48ab-84d2-7acb9d546f01\" (UID: \"c600570c-ddbe-48ab-84d2-7acb9d546f01\") " Dec 02 14:47:56 crc kubenswrapper[4814]: I1202 14:47:56.745865 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c600570c-ddbe-48ab-84d2-7acb9d546f01-config-data\") pod 
\"c600570c-ddbe-48ab-84d2-7acb9d546f01\" (UID: \"c600570c-ddbe-48ab-84d2-7acb9d546f01\") " Dec 02 14:47:56 crc kubenswrapper[4814]: I1202 14:47:56.752299 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c600570c-ddbe-48ab-84d2-7acb9d546f01-kube-api-access-gppzb" (OuterVolumeSpecName: "kube-api-access-gppzb") pod "c600570c-ddbe-48ab-84d2-7acb9d546f01" (UID: "c600570c-ddbe-48ab-84d2-7acb9d546f01"). InnerVolumeSpecName "kube-api-access-gppzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:47:56 crc kubenswrapper[4814]: I1202 14:47:56.772142 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c600570c-ddbe-48ab-84d2-7acb9d546f01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c600570c-ddbe-48ab-84d2-7acb9d546f01" (UID: "c600570c-ddbe-48ab-84d2-7acb9d546f01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:47:56 crc kubenswrapper[4814]: I1202 14:47:56.809481 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c600570c-ddbe-48ab-84d2-7acb9d546f01-config-data" (OuterVolumeSpecName: "config-data") pod "c600570c-ddbe-48ab-84d2-7acb9d546f01" (UID: "c600570c-ddbe-48ab-84d2-7acb9d546f01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:47:56 crc kubenswrapper[4814]: I1202 14:47:56.847909 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c600570c-ddbe-48ab-84d2-7acb9d546f01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:56 crc kubenswrapper[4814]: I1202 14:47:56.847945 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c600570c-ddbe-48ab-84d2-7acb9d546f01-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:56 crc kubenswrapper[4814]: I1202 14:47:56.847957 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gppzb\" (UniqueName: \"kubernetes.io/projected/c600570c-ddbe-48ab-84d2-7acb9d546f01-kube-api-access-gppzb\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.308080 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-76pvq" event={"ID":"c600570c-ddbe-48ab-84d2-7acb9d546f01","Type":"ContainerDied","Data":"c55acd2e3028b02a0fbbd8ea0d3e381b9bf522a984b8b4355f0f14a83b7f0180"} Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.308123 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c55acd2e3028b02a0fbbd8ea0d3e381b9bf522a984b8b4355f0f14a83b7f0180" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.308149 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-76pvq" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.527838 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-chvrx"] Dec 02 14:47:57 crc kubenswrapper[4814]: E1202 14:47:57.528190 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c600570c-ddbe-48ab-84d2-7acb9d546f01" containerName="keystone-db-sync" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.528206 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="c600570c-ddbe-48ab-84d2-7acb9d546f01" containerName="keystone-db-sync" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.528390 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="c600570c-ddbe-48ab-84d2-7acb9d546f01" containerName="keystone-db-sync" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.528896 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.536056 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.536076 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.536405 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.536522 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-d2qcr" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.537880 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.552487 4814 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-chvrx"] Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.662479 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-config-data\") pod \"keystone-bootstrap-chvrx\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.662546 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-scripts\") pod \"keystone-bootstrap-chvrx\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.662578 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnbsn\" (UniqueName: \"kubernetes.io/projected/ac76a205-0632-4f33-8a21-18ef72fb2807-kube-api-access-bnbsn\") pod \"keystone-bootstrap-chvrx\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.662600 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-combined-ca-bundle\") pod \"keystone-bootstrap-chvrx\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.662628 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-fernet-keys\") pod 
\"keystone-bootstrap-chvrx\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.662674 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-credential-keys\") pod \"keystone-bootstrap-chvrx\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.681856 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.684018 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.689391 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.689391 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.706184 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.763912 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-credential-keys\") pod \"keystone-bootstrap-chvrx\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.764016 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-run-httpd\") pod \"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.764055 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-config-data\") pod \"keystone-bootstrap-chvrx\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.764078 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.764121 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-log-httpd\") pod \"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.764145 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.764295 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-scripts\") pod 
\"keystone-bootstrap-chvrx\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.764332 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnbsn\" (UniqueName: \"kubernetes.io/projected/ac76a205-0632-4f33-8a21-18ef72fb2807-kube-api-access-bnbsn\") pod \"keystone-bootstrap-chvrx\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.764355 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-combined-ca-bundle\") pod \"keystone-bootstrap-chvrx\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.764379 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-fernet-keys\") pod \"keystone-bootstrap-chvrx\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.764423 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nb29\" (UniqueName: \"kubernetes.io/projected/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-kube-api-access-7nb29\") pod \"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.764452 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-scripts\") pod 
\"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.764510 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-config-data\") pod \"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.768149 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-scripts\") pod \"keystone-bootstrap-chvrx\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.768441 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-fernet-keys\") pod \"keystone-bootstrap-chvrx\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.768549 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-credential-keys\") pod \"keystone-bootstrap-chvrx\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.775944 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-combined-ca-bundle\") pod \"keystone-bootstrap-chvrx\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " 
pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.784511 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnbsn\" (UniqueName: \"kubernetes.io/projected/ac76a205-0632-4f33-8a21-18ef72fb2807-kube-api-access-bnbsn\") pod \"keystone-bootstrap-chvrx\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.786587 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-config-data\") pod \"keystone-bootstrap-chvrx\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.855294 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.865392 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nb29\" (UniqueName: \"kubernetes.io/projected/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-kube-api-access-7nb29\") pod \"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.865437 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-scripts\") pod \"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.865485 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-config-data\") pod \"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.865553 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-run-httpd\") pod \"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.865578 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.865605 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-log-httpd\") pod \"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.865623 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.866564 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-run-httpd\") pod \"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " 
pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.866687 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-log-httpd\") pod \"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.871639 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-config-data\") pod \"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.872066 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.872357 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-scripts\") pod \"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.872554 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:57 crc kubenswrapper[4814]: I1202 14:47:57.887160 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nb29\" (UniqueName: 
\"kubernetes.io/projected/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-kube-api-access-7nb29\") pod \"ceilometer-0\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:58 crc kubenswrapper[4814]: I1202 14:47:58.003399 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:47:58 crc kubenswrapper[4814]: I1202 14:47:58.341089 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-chvrx"] Dec 02 14:47:58 crc kubenswrapper[4814]: W1202 14:47:58.345073 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac76a205_0632_4f33_8a21_18ef72fb2807.slice/crio-6c91415b7187a5c88a075622095584285a745dd41ed697c063ee82148c5f8df9 WatchSource:0}: Error finding container 6c91415b7187a5c88a075622095584285a745dd41ed697c063ee82148c5f8df9: Status 404 returned error can't find the container with id 6c91415b7187a5c88a075622095584285a745dd41ed697c063ee82148c5f8df9 Dec 02 14:47:58 crc kubenswrapper[4814]: I1202 14:47:58.351704 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Dec 02 14:47:58 crc kubenswrapper[4814]: I1202 14:47:58.534580 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:47:59 crc kubenswrapper[4814]: I1202 14:47:59.331677 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-chvrx" event={"ID":"ac76a205-0632-4f33-8a21-18ef72fb2807","Type":"ContainerStarted","Data":"0f33ea1fc7a1d6ec1b9f4ee3215d2415f4d23a1fb74688276f005f160bd933d9"} Dec 02 14:47:59 crc kubenswrapper[4814]: I1202 14:47:59.331977 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-chvrx" 
event={"ID":"ac76a205-0632-4f33-8a21-18ef72fb2807","Type":"ContainerStarted","Data":"6c91415b7187a5c88a075622095584285a745dd41ed697c063ee82148c5f8df9"} Dec 02 14:47:59 crc kubenswrapper[4814]: I1202 14:47:59.333721 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6","Type":"ContainerStarted","Data":"d4d678a67b3060af33ce03f973f7b006562730e7194cd6ab8b81e6355883569e"} Dec 02 14:47:59 crc kubenswrapper[4814]: I1202 14:47:59.356828 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-chvrx" podStartSLOduration=2.356812841 podStartE2EDuration="2.356812841s" podCreationTimestamp="2025-12-02 14:47:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:47:59.355550739 +0000 UTC m=+1141.479912943" watchObservedRunningTime="2025-12-02 14:47:59.356812841 +0000 UTC m=+1141.481175045" Dec 02 14:47:59 crc kubenswrapper[4814]: I1202 14:47:59.608717 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:48:02 crc kubenswrapper[4814]: I1202 14:48:02.368520 4814 generic.go:334] "Generic (PLEG): container finished" podID="ac76a205-0632-4f33-8a21-18ef72fb2807" containerID="0f33ea1fc7a1d6ec1b9f4ee3215d2415f4d23a1fb74688276f005f160bd933d9" exitCode=0 Dec 02 14:48:02 crc kubenswrapper[4814]: I1202 14:48:02.368602 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-chvrx" event={"ID":"ac76a205-0632-4f33-8a21-18ef72fb2807","Type":"ContainerDied","Data":"0f33ea1fc7a1d6ec1b9f4ee3215d2415f4d23a1fb74688276f005f160bd933d9"} Dec 02 14:48:03 crc kubenswrapper[4814]: I1202 14:48:03.735150 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:48:03 crc kubenswrapper[4814]: I1202 14:48:03.882475 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-credential-keys\") pod \"ac76a205-0632-4f33-8a21-18ef72fb2807\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " Dec 02 14:48:03 crc kubenswrapper[4814]: I1202 14:48:03.882569 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-fernet-keys\") pod \"ac76a205-0632-4f33-8a21-18ef72fb2807\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " Dec 02 14:48:03 crc kubenswrapper[4814]: I1202 14:48:03.882635 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-combined-ca-bundle\") pod \"ac76a205-0632-4f33-8a21-18ef72fb2807\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " Dec 02 14:48:03 crc kubenswrapper[4814]: I1202 14:48:03.882827 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-config-data\") pod \"ac76a205-0632-4f33-8a21-18ef72fb2807\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " Dec 02 14:48:03 crc kubenswrapper[4814]: I1202 14:48:03.882908 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-scripts\") pod \"ac76a205-0632-4f33-8a21-18ef72fb2807\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " Dec 02 14:48:03 crc kubenswrapper[4814]: I1202 14:48:03.882935 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnbsn\" 
(UniqueName: \"kubernetes.io/projected/ac76a205-0632-4f33-8a21-18ef72fb2807-kube-api-access-bnbsn\") pod \"ac76a205-0632-4f33-8a21-18ef72fb2807\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " Dec 02 14:48:03 crc kubenswrapper[4814]: I1202 14:48:03.891941 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ac76a205-0632-4f33-8a21-18ef72fb2807" (UID: "ac76a205-0632-4f33-8a21-18ef72fb2807"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:48:03 crc kubenswrapper[4814]: I1202 14:48:03.891957 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac76a205-0632-4f33-8a21-18ef72fb2807-kube-api-access-bnbsn" (OuterVolumeSpecName: "kube-api-access-bnbsn") pod "ac76a205-0632-4f33-8a21-18ef72fb2807" (UID: "ac76a205-0632-4f33-8a21-18ef72fb2807"). InnerVolumeSpecName "kube-api-access-bnbsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:48:03 crc kubenswrapper[4814]: I1202 14:48:03.892655 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ac76a205-0632-4f33-8a21-18ef72fb2807" (UID: "ac76a205-0632-4f33-8a21-18ef72fb2807"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:48:03 crc kubenswrapper[4814]: I1202 14:48:03.893396 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-scripts" (OuterVolumeSpecName: "scripts") pod "ac76a205-0632-4f33-8a21-18ef72fb2807" (UID: "ac76a205-0632-4f33-8a21-18ef72fb2807"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:48:03 crc kubenswrapper[4814]: E1202 14:48:03.910702 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-config-data podName:ac76a205-0632-4f33-8a21-18ef72fb2807 nodeName:}" failed. No retries permitted until 2025-12-02 14:48:04.41066422 +0000 UTC m=+1146.535026424 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-config-data") pod "ac76a205-0632-4f33-8a21-18ef72fb2807" (UID: "ac76a205-0632-4f33-8a21-18ef72fb2807") : error deleting /var/lib/kubelet/pods/ac76a205-0632-4f33-8a21-18ef72fb2807/volume-subpaths: remove /var/lib/kubelet/pods/ac76a205-0632-4f33-8a21-18ef72fb2807/volume-subpaths: no such file or directory Dec 02 14:48:03 crc kubenswrapper[4814]: I1202 14:48:03.913705 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac76a205-0632-4f33-8a21-18ef72fb2807" (UID: "ac76a205-0632-4f33-8a21-18ef72fb2807"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:48:03 crc kubenswrapper[4814]: I1202 14:48:03.986055 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:03 crc kubenswrapper[4814]: I1202 14:48:03.986094 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnbsn\" (UniqueName: \"kubernetes.io/projected/ac76a205-0632-4f33-8a21-18ef72fb2807-kube-api-access-bnbsn\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:03 crc kubenswrapper[4814]: I1202 14:48:03.986111 4814 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:03 crc kubenswrapper[4814]: I1202 14:48:03.986124 4814 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:03 crc kubenswrapper[4814]: I1202 14:48:03.986136 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.390186 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-chvrx" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.390632 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-chvrx" event={"ID":"ac76a205-0632-4f33-8a21-18ef72fb2807","Type":"ContainerDied","Data":"6c91415b7187a5c88a075622095584285a745dd41ed697c063ee82148c5f8df9"} Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.390667 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c91415b7187a5c88a075622095584285a745dd41ed697c063ee82148c5f8df9" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.392338 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6","Type":"ContainerStarted","Data":"f1ba3df0aeedae130dd55b4faec2b94da9a23b6b54ce81885e13b7e522eaab6e"} Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.477976 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-chvrx"] Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.484629 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-chvrx"] Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.492566 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-config-data\") pod \"ac76a205-0632-4f33-8a21-18ef72fb2807\" (UID: \"ac76a205-0632-4f33-8a21-18ef72fb2807\") " Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.495753 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-config-data" (OuterVolumeSpecName: "config-data") pod "ac76a205-0632-4f33-8a21-18ef72fb2807" (UID: "ac76a205-0632-4f33-8a21-18ef72fb2807"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.557117 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-xfbr9"] Dec 02 14:48:04 crc kubenswrapper[4814]: E1202 14:48:04.557443 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac76a205-0632-4f33-8a21-18ef72fb2807" containerName="keystone-bootstrap" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.557460 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac76a205-0632-4f33-8a21-18ef72fb2807" containerName="keystone-bootstrap" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.557615 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac76a205-0632-4f33-8a21-18ef72fb2807" containerName="keystone-bootstrap" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.558195 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.569061 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-xfbr9"] Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.594754 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac76a205-0632-4f33-8a21-18ef72fb2807-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.695770 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-fernet-keys\") pod \"keystone-bootstrap-xfbr9\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.696151 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-978tz\" (UniqueName: \"kubernetes.io/projected/eb0c692e-42a7-4e89-b387-8215fd02a76f-kube-api-access-978tz\") pod \"keystone-bootstrap-xfbr9\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.696279 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-credential-keys\") pod \"keystone-bootstrap-xfbr9\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.696311 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-scripts\") pod \"keystone-bootstrap-xfbr9\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.696340 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-combined-ca-bundle\") pod \"keystone-bootstrap-xfbr9\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.696364 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-config-data\") pod \"keystone-bootstrap-xfbr9\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:04 crc kubenswrapper[4814]: 
I1202 14:48:04.797858 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-credential-keys\") pod \"keystone-bootstrap-xfbr9\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.797917 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-scripts\") pod \"keystone-bootstrap-xfbr9\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.797952 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-combined-ca-bundle\") pod \"keystone-bootstrap-xfbr9\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.797983 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-config-data\") pod \"keystone-bootstrap-xfbr9\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.798023 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-fernet-keys\") pod \"keystone-bootstrap-xfbr9\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.798086 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-978tz\" (UniqueName: \"kubernetes.io/projected/eb0c692e-42a7-4e89-b387-8215fd02a76f-kube-api-access-978tz\") pod \"keystone-bootstrap-xfbr9\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.802612 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-combined-ca-bundle\") pod \"keystone-bootstrap-xfbr9\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.803058 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-credential-keys\") pod \"keystone-bootstrap-xfbr9\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.804093 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-scripts\") pod \"keystone-bootstrap-xfbr9\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.808980 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-fernet-keys\") pod \"keystone-bootstrap-xfbr9\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.819125 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-978tz\" 
(UniqueName: \"kubernetes.io/projected/eb0c692e-42a7-4e89-b387-8215fd02a76f-kube-api-access-978tz\") pod \"keystone-bootstrap-xfbr9\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.818921 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-config-data\") pod \"keystone-bootstrap-xfbr9\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:04 crc kubenswrapper[4814]: I1202 14:48:04.874873 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:05 crc kubenswrapper[4814]: I1202 14:48:05.350986 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-xfbr9"] Dec 02 14:48:05 crc kubenswrapper[4814]: I1202 14:48:05.409552 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6","Type":"ContainerStarted","Data":"60cd41c06f4ecbf1bbb682208af1692004c2691c36c4dfebf39b3165cd563389"} Dec 02 14:48:05 crc kubenswrapper[4814]: I1202 14:48:05.411083 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" event={"ID":"eb0c692e-42a7-4e89-b387-8215fd02a76f","Type":"ContainerStarted","Data":"7556ffa22ff50b939c78d7b34cf90c6f756bed7542e8d45ede112b5035262919"} Dec 02 14:48:06 crc kubenswrapper[4814]: I1202 14:48:06.160716 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac76a205-0632-4f33-8a21-18ef72fb2807" path="/var/lib/kubelet/pods/ac76a205-0632-4f33-8a21-18ef72fb2807/volumes" Dec 02 14:48:06 crc kubenswrapper[4814]: I1202 14:48:06.424349 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" event={"ID":"eb0c692e-42a7-4e89-b387-8215fd02a76f","Type":"ContainerStarted","Data":"79dcc93f3eaf042dc51fe4dd3dbd92cdd4b4a0eb8505cabb36243f675f36954b"} Dec 02 14:48:06 crc kubenswrapper[4814]: I1202 14:48:06.455532 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" podStartSLOduration=2.455481204 podStartE2EDuration="2.455481204s" podCreationTimestamp="2025-12-02 14:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:48:06.445037823 +0000 UTC m=+1148.569400027" watchObservedRunningTime="2025-12-02 14:48:06.455481204 +0000 UTC m=+1148.579843408" Dec 02 14:48:10 crc kubenswrapper[4814]: I1202 14:48:10.483740 4814 generic.go:334] "Generic (PLEG): container finished" podID="eb0c692e-42a7-4e89-b387-8215fd02a76f" containerID="79dcc93f3eaf042dc51fe4dd3dbd92cdd4b4a0eb8505cabb36243f675f36954b" exitCode=0 Dec 02 14:48:10 crc kubenswrapper[4814]: I1202 14:48:10.483797 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" event={"ID":"eb0c692e-42a7-4e89-b387-8215fd02a76f","Type":"ContainerDied","Data":"79dcc93f3eaf042dc51fe4dd3dbd92cdd4b4a0eb8505cabb36243f675f36954b"} Dec 02 14:48:17 crc kubenswrapper[4814]: I1202 14:48:17.926113 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:18 crc kubenswrapper[4814]: I1202 14:48:18.044675 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-config-data\") pod \"eb0c692e-42a7-4e89-b387-8215fd02a76f\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " Dec 02 14:48:18 crc kubenswrapper[4814]: I1202 14:48:18.044744 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-fernet-keys\") pod \"eb0c692e-42a7-4e89-b387-8215fd02a76f\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " Dec 02 14:48:18 crc kubenswrapper[4814]: I1202 14:48:18.044796 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-978tz\" (UniqueName: \"kubernetes.io/projected/eb0c692e-42a7-4e89-b387-8215fd02a76f-kube-api-access-978tz\") pod \"eb0c692e-42a7-4e89-b387-8215fd02a76f\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " Dec 02 14:48:18 crc kubenswrapper[4814]: I1202 14:48:18.044849 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-combined-ca-bundle\") pod \"eb0c692e-42a7-4e89-b387-8215fd02a76f\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " Dec 02 14:48:18 crc kubenswrapper[4814]: I1202 14:48:18.044933 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-credential-keys\") pod \"eb0c692e-42a7-4e89-b387-8215fd02a76f\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " Dec 02 14:48:18 crc kubenswrapper[4814]: I1202 14:48:18.044960 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-scripts\") pod \"eb0c692e-42a7-4e89-b387-8215fd02a76f\" (UID: \"eb0c692e-42a7-4e89-b387-8215fd02a76f\") " Dec 02 14:48:18 crc kubenswrapper[4814]: I1202 14:48:18.051017 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-scripts" (OuterVolumeSpecName: "scripts") pod "eb0c692e-42a7-4e89-b387-8215fd02a76f" (UID: "eb0c692e-42a7-4e89-b387-8215fd02a76f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:48:18 crc kubenswrapper[4814]: I1202 14:48:18.051080 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "eb0c692e-42a7-4e89-b387-8215fd02a76f" (UID: "eb0c692e-42a7-4e89-b387-8215fd02a76f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:48:18 crc kubenswrapper[4814]: I1202 14:48:18.056506 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0c692e-42a7-4e89-b387-8215fd02a76f-kube-api-access-978tz" (OuterVolumeSpecName: "kube-api-access-978tz") pod "eb0c692e-42a7-4e89-b387-8215fd02a76f" (UID: "eb0c692e-42a7-4e89-b387-8215fd02a76f"). InnerVolumeSpecName "kube-api-access-978tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:48:18 crc kubenswrapper[4814]: I1202 14:48:18.056569 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "eb0c692e-42a7-4e89-b387-8215fd02a76f" (UID: "eb0c692e-42a7-4e89-b387-8215fd02a76f"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:48:18 crc kubenswrapper[4814]: I1202 14:48:18.069737 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-config-data" (OuterVolumeSpecName: "config-data") pod "eb0c692e-42a7-4e89-b387-8215fd02a76f" (UID: "eb0c692e-42a7-4e89-b387-8215fd02a76f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:48:18 crc kubenswrapper[4814]: I1202 14:48:18.070829 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb0c692e-42a7-4e89-b387-8215fd02a76f" (UID: "eb0c692e-42a7-4e89-b387-8215fd02a76f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:48:18 crc kubenswrapper[4814]: I1202 14:48:18.146902 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:18 crc kubenswrapper[4814]: I1202 14:48:18.146949 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:18 crc kubenswrapper[4814]: I1202 14:48:18.146963 4814 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:18 crc kubenswrapper[4814]: I1202 14:48:18.146977 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-978tz\" (UniqueName: \"kubernetes.io/projected/eb0c692e-42a7-4e89-b387-8215fd02a76f-kube-api-access-978tz\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:18 crc 
kubenswrapper[4814]: I1202 14:48:18.146992 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:18 crc kubenswrapper[4814]: I1202 14:48:18.147004 4814 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eb0c692e-42a7-4e89-b387-8215fd02a76f-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:18 crc kubenswrapper[4814]: I1202 14:48:18.559879 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" event={"ID":"eb0c692e-42a7-4e89-b387-8215fd02a76f","Type":"ContainerDied","Data":"7556ffa22ff50b939c78d7b34cf90c6f756bed7542e8d45ede112b5035262919"} Dec 02 14:48:18 crc kubenswrapper[4814]: I1202 14:48:18.560275 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7556ffa22ff50b939c78d7b34cf90c6f756bed7542e8d45ede112b5035262919" Dec 02 14:48:18 crc kubenswrapper[4814]: I1202 14:48:18.560375 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-xfbr9" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.035430 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-697b49fc9c-nwnxx"] Dec 02 14:48:19 crc kubenswrapper[4814]: E1202 14:48:19.035810 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0c692e-42a7-4e89-b387-8215fd02a76f" containerName="keystone-bootstrap" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.035826 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0c692e-42a7-4e89-b387-8215fd02a76f" containerName="keystone-bootstrap" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.036028 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0c692e-42a7-4e89-b387-8215fd02a76f" containerName="keystone-bootstrap" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.036701 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.038935 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-keystone-internal-svc" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.039039 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-d2qcr" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.039213 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-keystone-public-svc" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.039259 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.039456 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Dec 02 14:48:19 crc kubenswrapper[4814]: 
I1202 14:48:19.039926 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.058674 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-697b49fc9c-nwnxx"] Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.164343 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-credential-keys\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.164416 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-fernet-keys\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.164449 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-combined-ca-bundle\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.164487 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-scripts\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.164816 4814 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-public-tls-certs\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.165062 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-internal-tls-certs\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.165316 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgm92\" (UniqueName: \"kubernetes.io/projected/cd5c8f55-035f-4536-a343-31a0d7cde6c4-kube-api-access-zgm92\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.165471 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-config-data\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.267168 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-public-tls-certs\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 
02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.267271 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-internal-tls-certs\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.267289 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgm92\" (UniqueName: \"kubernetes.io/projected/cd5c8f55-035f-4536-a343-31a0d7cde6c4-kube-api-access-zgm92\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.267321 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-config-data\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.267379 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-credential-keys\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.267412 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-fernet-keys\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc 
kubenswrapper[4814]: I1202 14:48:19.267442 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-combined-ca-bundle\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.267480 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-scripts\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.271916 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-scripts\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.272012 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-combined-ca-bundle\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.272803 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-public-tls-certs\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.272812 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-config-data\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.279334 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-fernet-keys\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.279338 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-credential-keys\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.282944 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5c8f55-035f-4536-a343-31a0d7cde6c4-internal-tls-certs\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.292917 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgm92\" (UniqueName: \"kubernetes.io/projected/cd5c8f55-035f-4536-a343-31a0d7cde6c4-kube-api-access-zgm92\") pod \"keystone-697b49fc9c-nwnxx\" (UID: \"cd5c8f55-035f-4536-a343-31a0d7cde6c4\") " pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:19 crc kubenswrapper[4814]: I1202 14:48:19.351270 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:20 crc kubenswrapper[4814]: I1202 14:48:20.590729 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6","Type":"ContainerStarted","Data":"5e19fa7714d8b87a63a72b19fe7b2e518323e4a7b2bac68786fc93b6126a89d7"} Dec 02 14:48:20 crc kubenswrapper[4814]: I1202 14:48:20.651000 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:48:20 crc kubenswrapper[4814]: I1202 14:48:20.651052 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:48:20 crc kubenswrapper[4814]: I1202 14:48:20.686309 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-697b49fc9c-nwnxx"] Dec 02 14:48:20 crc kubenswrapper[4814]: W1202 14:48:20.687999 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd5c8f55_035f_4536_a343_31a0d7cde6c4.slice/crio-775f4d41608868860e5cf423f31c3852e903d68127647542a659ee9d62161f43 WatchSource:0}: Error finding container 775f4d41608868860e5cf423f31c3852e903d68127647542a659ee9d62161f43: Status 404 returned error can't find the container with id 775f4d41608868860e5cf423f31c3852e903d68127647542a659ee9d62161f43 Dec 02 14:48:21 crc kubenswrapper[4814]: I1202 14:48:21.601185 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" event={"ID":"cd5c8f55-035f-4536-a343-31a0d7cde6c4","Type":"ContainerStarted","Data":"a9af76ae47168f50c0197c9738f37e438b8ecd40343eb373d6b949a61513d3a3"} Dec 02 14:48:21 crc kubenswrapper[4814]: I1202 14:48:21.601571 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" event={"ID":"cd5c8f55-035f-4536-a343-31a0d7cde6c4","Type":"ContainerStarted","Data":"775f4d41608868860e5cf423f31c3852e903d68127647542a659ee9d62161f43"} Dec 02 14:48:21 crc kubenswrapper[4814]: I1202 14:48:21.601597 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:32 crc kubenswrapper[4814]: I1202 14:48:32.714962 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6","Type":"ContainerStarted","Data":"3414cff153b6a3183bcbc7ee4ac4a636a80457857eae8c04357d07fdc665ef29"} Dec 02 14:48:32 crc kubenswrapper[4814]: I1202 14:48:32.715631 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:32 crc kubenswrapper[4814]: I1202 14:48:32.715127 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" containerName="proxy-httpd" containerID="cri-o://3414cff153b6a3183bcbc7ee4ac4a636a80457857eae8c04357d07fdc665ef29" gracePeriod=30 Dec 02 14:48:32 crc kubenswrapper[4814]: I1202 14:48:32.715109 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" containerName="ceilometer-central-agent" containerID="cri-o://f1ba3df0aeedae130dd55b4faec2b94da9a23b6b54ce81885e13b7e522eaab6e" gracePeriod=30 Dec 02 14:48:32 crc kubenswrapper[4814]: I1202 14:48:32.715172 4814 
kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" containerName="sg-core" containerID="cri-o://5e19fa7714d8b87a63a72b19fe7b2e518323e4a7b2bac68786fc93b6126a89d7" gracePeriod=30 Dec 02 14:48:32 crc kubenswrapper[4814]: I1202 14:48:32.715146 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" containerName="ceilometer-notification-agent" containerID="cri-o://60cd41c06f4ecbf1bbb682208af1692004c2691c36c4dfebf39b3165cd563389" gracePeriod=30 Dec 02 14:48:32 crc kubenswrapper[4814]: I1202 14:48:32.740541 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" podStartSLOduration=13.740524152 podStartE2EDuration="13.740524152s" podCreationTimestamp="2025-12-02 14:48:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:48:21.629137859 +0000 UTC m=+1163.753500063" watchObservedRunningTime="2025-12-02 14:48:32.740524152 +0000 UTC m=+1174.864886356" Dec 02 14:48:32 crc kubenswrapper[4814]: I1202 14:48:32.742923 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.097868886 podStartE2EDuration="35.742914441s" podCreationTimestamp="2025-12-02 14:47:57 +0000 UTC" firstStartedPulling="2025-12-02 14:47:58.543448827 +0000 UTC m=+1140.667811041" lastFinishedPulling="2025-12-02 14:48:32.188494392 +0000 UTC m=+1174.312856596" observedRunningTime="2025-12-02 14:48:32.73643093 +0000 UTC m=+1174.860793144" watchObservedRunningTime="2025-12-02 14:48:32.742914441 +0000 UTC m=+1174.867276635" Dec 02 14:48:33 crc kubenswrapper[4814]: I1202 14:48:33.733128 4814 generic.go:334] "Generic (PLEG): container finished" 
podID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" containerID="3414cff153b6a3183bcbc7ee4ac4a636a80457857eae8c04357d07fdc665ef29" exitCode=0 Dec 02 14:48:33 crc kubenswrapper[4814]: I1202 14:48:33.733419 4814 generic.go:334] "Generic (PLEG): container finished" podID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" containerID="5e19fa7714d8b87a63a72b19fe7b2e518323e4a7b2bac68786fc93b6126a89d7" exitCode=2 Dec 02 14:48:33 crc kubenswrapper[4814]: I1202 14:48:33.733428 4814 generic.go:334] "Generic (PLEG): container finished" podID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" containerID="f1ba3df0aeedae130dd55b4faec2b94da9a23b6b54ce81885e13b7e522eaab6e" exitCode=0 Dec 02 14:48:33 crc kubenswrapper[4814]: I1202 14:48:33.733181 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6","Type":"ContainerDied","Data":"3414cff153b6a3183bcbc7ee4ac4a636a80457857eae8c04357d07fdc665ef29"} Dec 02 14:48:33 crc kubenswrapper[4814]: I1202 14:48:33.733460 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6","Type":"ContainerDied","Data":"5e19fa7714d8b87a63a72b19fe7b2e518323e4a7b2bac68786fc93b6126a89d7"} Dec 02 14:48:33 crc kubenswrapper[4814]: I1202 14:48:33.733474 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6","Type":"ContainerDied","Data":"f1ba3df0aeedae130dd55b4faec2b94da9a23b6b54ce81885e13b7e522eaab6e"} Dec 02 14:48:36 crc kubenswrapper[4814]: E1202 14:48:36.495332 4814 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd85c2d8f_3be8_44e4_9c2e_b0904d723ed6.slice/crio-conmon-60cd41c06f4ecbf1bbb682208af1692004c2691c36c4dfebf39b3165cd563389.scope\": RecentStats: unable to find data in memory cache]" Dec 02 
14:48:36 crc kubenswrapper[4814]: I1202 14:48:36.764023 4814 generic.go:334] "Generic (PLEG): container finished" podID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" containerID="60cd41c06f4ecbf1bbb682208af1692004c2691c36c4dfebf39b3165cd563389" exitCode=0 Dec 02 14:48:36 crc kubenswrapper[4814]: I1202 14:48:36.764087 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6","Type":"ContainerDied","Data":"60cd41c06f4ecbf1bbb682208af1692004c2691c36c4dfebf39b3165cd563389"} Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.067972 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.199010 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-scripts\") pod \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.199165 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nb29\" (UniqueName: \"kubernetes.io/projected/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-kube-api-access-7nb29\") pod \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.199204 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-combined-ca-bundle\") pod \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.200354 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-sg-core-conf-yaml\") pod \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.200478 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-log-httpd\") pod \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.200578 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-config-data\") pod \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.200674 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-run-httpd\") pod \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\" (UID: \"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6\") " Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.201098 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" (UID: "d85c2d8f-3be8-44e4-9c2e-b0904d723ed6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.201639 4814 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.202060 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" (UID: "d85c2d8f-3be8-44e4-9c2e-b0904d723ed6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.204618 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-scripts" (OuterVolumeSpecName: "scripts") pod "d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" (UID: "d85c2d8f-3be8-44e4-9c2e-b0904d723ed6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.216616 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-kube-api-access-7nb29" (OuterVolumeSpecName: "kube-api-access-7nb29") pod "d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" (UID: "d85c2d8f-3be8-44e4-9c2e-b0904d723ed6"). InnerVolumeSpecName "kube-api-access-7nb29". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.225084 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" (UID: "d85c2d8f-3be8-44e4-9c2e-b0904d723ed6"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.272558 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" (UID: "d85c2d8f-3be8-44e4-9c2e-b0904d723ed6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.281843 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-config-data" (OuterVolumeSpecName: "config-data") pod "d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" (UID: "d85c2d8f-3be8-44e4-9c2e-b0904d723ed6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.302905 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.302950 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.302960 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nb29\" (UniqueName: \"kubernetes.io/projected/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-kube-api-access-7nb29\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.302970 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.302982 4814 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.302990 4814 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.775747 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d85c2d8f-3be8-44e4-9c2e-b0904d723ed6","Type":"ContainerDied","Data":"d4d678a67b3060af33ce03f973f7b006562730e7194cd6ab8b81e6355883569e"} Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.775795 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.776075 4814 scope.go:117] "RemoveContainer" containerID="3414cff153b6a3183bcbc7ee4ac4a636a80457857eae8c04357d07fdc665ef29" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.805979 4814 scope.go:117] "RemoveContainer" containerID="5e19fa7714d8b87a63a72b19fe7b2e518323e4a7b2bac68786fc93b6126a89d7" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.812130 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.818578 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.831451 4814 scope.go:117] "RemoveContainer" containerID="60cd41c06f4ecbf1bbb682208af1692004c2691c36c4dfebf39b3165cd563389" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.859541 4814 scope.go:117] "RemoveContainer" containerID="f1ba3df0aeedae130dd55b4faec2b94da9a23b6b54ce81885e13b7e522eaab6e" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.859581 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:48:37 crc kubenswrapper[4814]: E1202 14:48:37.860109 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" containerName="sg-core" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.860135 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" containerName="sg-core" Dec 02 14:48:37 crc kubenswrapper[4814]: E1202 14:48:37.860160 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" containerName="ceilometer-notification-agent" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.860174 4814 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" containerName="ceilometer-notification-agent" Dec 02 14:48:37 crc kubenswrapper[4814]: E1202 14:48:37.860187 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" containerName="ceilometer-central-agent" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.860196 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" containerName="ceilometer-central-agent" Dec 02 14:48:37 crc kubenswrapper[4814]: E1202 14:48:37.860215 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" containerName="proxy-httpd" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.860251 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" containerName="proxy-httpd" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.860547 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" containerName="ceilometer-notification-agent" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.860576 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" containerName="sg-core" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.860590 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" containerName="ceilometer-central-agent" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.860605 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" containerName="proxy-httpd" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.862935 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.864937 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.866661 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.873391 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.917189 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-config-data\") pod \"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.917285 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.917379 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-scripts\") pod \"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.917442 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-run-httpd\") pod 
\"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.917475 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-log-httpd\") pod \"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.917507 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wblff\" (UniqueName: \"kubernetes.io/projected/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-kube-api-access-wblff\") pod \"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:37 crc kubenswrapper[4814]: I1202 14:48:37.917598 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:38 crc kubenswrapper[4814]: I1202 14:48:38.018503 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-scripts\") pod \"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:38 crc kubenswrapper[4814]: I1202 14:48:38.018586 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-run-httpd\") pod \"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 
14:48:38 crc kubenswrapper[4814]: I1202 14:48:38.018636 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-log-httpd\") pod \"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:38 crc kubenswrapper[4814]: I1202 14:48:38.018662 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wblff\" (UniqueName: \"kubernetes.io/projected/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-kube-api-access-wblff\") pod \"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:38 crc kubenswrapper[4814]: I1202 14:48:38.018726 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:38 crc kubenswrapper[4814]: I1202 14:48:38.018762 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-config-data\") pod \"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:38 crc kubenswrapper[4814]: I1202 14:48:38.018783 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:38 crc kubenswrapper[4814]: I1202 14:48:38.019257 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-log-httpd\") pod \"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:38 crc kubenswrapper[4814]: I1202 14:48:38.019346 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-run-httpd\") pod \"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:38 crc kubenswrapper[4814]: I1202 14:48:38.022681 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:38 crc kubenswrapper[4814]: I1202 14:48:38.022868 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-scripts\") pod \"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:38 crc kubenswrapper[4814]: I1202 14:48:38.028169 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-config-data\") pod \"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:38 crc kubenswrapper[4814]: I1202 14:48:38.033084 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:38 crc 
kubenswrapper[4814]: I1202 14:48:38.035988 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wblff\" (UniqueName: \"kubernetes.io/projected/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-kube-api-access-wblff\") pod \"ceilometer-0\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:38 crc kubenswrapper[4814]: I1202 14:48:38.155497 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d85c2d8f-3be8-44e4-9c2e-b0904d723ed6" path="/var/lib/kubelet/pods/d85c2d8f-3be8-44e4-9c2e-b0904d723ed6/volumes" Dec 02 14:48:38 crc kubenswrapper[4814]: I1202 14:48:38.181807 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:38 crc kubenswrapper[4814]: I1202 14:48:38.716346 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:48:38 crc kubenswrapper[4814]: I1202 14:48:38.785681 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"16ff12c6-e74f-4c04-b3e4-f2cf54533e54","Type":"ContainerStarted","Data":"376abb5e68cb41070f2556ec7ffe49c4d4e0f9d29fd21d286397f20209a09fa2"} Dec 02 14:48:40 crc kubenswrapper[4814]: I1202 14:48:40.811567 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"16ff12c6-e74f-4c04-b3e4-f2cf54533e54","Type":"ContainerStarted","Data":"71fff8a58ff5e8c6eb40573aaeeae89a27cf10f938b411346bfed104658ef7c7"} Dec 02 14:48:42 crc kubenswrapper[4814]: I1202 14:48:42.835607 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"16ff12c6-e74f-4c04-b3e4-f2cf54533e54","Type":"ContainerStarted","Data":"a6678a331f60910f2b05a0039534b477cebb418c884673276c4ccec76f4c6205"} Dec 02 14:48:42 crc kubenswrapper[4814]: I1202 14:48:42.836376 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"16ff12c6-e74f-4c04-b3e4-f2cf54533e54","Type":"ContainerStarted","Data":"0db99766bfc562dab4e64a8cbfb54a4e3a0de4ec2bec7ab98ed8e5350fff1b33"} Dec 02 14:48:44 crc kubenswrapper[4814]: I1202 14:48:44.861460 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"16ff12c6-e74f-4c04-b3e4-f2cf54533e54","Type":"ContainerStarted","Data":"127125aefb9446b86f529c8c394024f2d59613a04cdf74337f4fd983002f9281"} Dec 02 14:48:44 crc kubenswrapper[4814]: I1202 14:48:44.861780 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:48:44 crc kubenswrapper[4814]: I1202 14:48:44.892254 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.262101747 podStartE2EDuration="7.892222883s" podCreationTimestamp="2025-12-02 14:48:37 +0000 UTC" firstStartedPulling="2025-12-02 14:48:38.725777171 +0000 UTC m=+1180.850139375" lastFinishedPulling="2025-12-02 14:48:44.355898307 +0000 UTC m=+1186.480260511" observedRunningTime="2025-12-02 14:48:44.887711111 +0000 UTC m=+1187.012073315" watchObservedRunningTime="2025-12-02 14:48:44.892222883 +0000 UTC m=+1187.016585087" Dec 02 14:48:50 crc kubenswrapper[4814]: I1202 14:48:50.651351 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:48:50 crc kubenswrapper[4814]: I1202 14:48:50.652033 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:48:50 crc kubenswrapper[4814]: I1202 14:48:50.652089 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 14:48:50 crc kubenswrapper[4814]: I1202 14:48:50.652971 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a790dc6cfa4af470ca8102d397fb8348715084a7eb2e984ef5303da30379ee9"} pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:48:50 crc kubenswrapper[4814]: I1202 14:48:50.653028 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" containerID="cri-o://1a790dc6cfa4af470ca8102d397fb8348715084a7eb2e984ef5303da30379ee9" gracePeriod=600 Dec 02 14:48:52 crc kubenswrapper[4814]: I1202 14:48:52.140938 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/keystone-697b49fc9c-nwnxx" Dec 02 14:48:52 crc kubenswrapper[4814]: I1202 14:48:52.944851 4814 generic.go:334] "Generic (PLEG): container finished" podID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerID="1a790dc6cfa4af470ca8102d397fb8348715084a7eb2e984ef5303da30379ee9" exitCode=0 Dec 02 14:48:52 crc kubenswrapper[4814]: I1202 14:48:52.945101 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerDied","Data":"1a790dc6cfa4af470ca8102d397fb8348715084a7eb2e984ef5303da30379ee9"} Dec 02 14:48:52 crc kubenswrapper[4814]: I1202 14:48:52.945508 4814 scope.go:117] "RemoveContainer" 
containerID="997855c695d121643798ebc9fbdd2b1a6b7bdb08460863ea230c98ff92c4714f" Dec 02 14:48:53 crc kubenswrapper[4814]: I1202 14:48:53.955651 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerStarted","Data":"5dc327b43bfefefc885573889c7f1e87316bb1d715cad4948d07c7ce0116b628"} Dec 02 14:48:57 crc kubenswrapper[4814]: I1202 14:48:57.314254 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstackclient"] Dec 02 14:48:57 crc kubenswrapper[4814]: I1202 14:48:57.316902 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Dec 02 14:48:57 crc kubenswrapper[4814]: I1202 14:48:57.319037 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config" Dec 02 14:48:57 crc kubenswrapper[4814]: I1202 14:48:57.319318 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstack-config-secret" Dec 02 14:48:57 crc kubenswrapper[4814]: I1202 14:48:57.322781 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Dec 02 14:48:57 crc kubenswrapper[4814]: I1202 14:48:57.338309 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstackclient-openstackclient-dockercfg-ztfbn" Dec 02 14:48:57 crc kubenswrapper[4814]: I1202 14:48:57.475806 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r4nn\" (UniqueName: \"kubernetes.io/projected/eab12c37-21cd-45ac-9cd9-4d39a91f9192-kube-api-access-7r4nn\") pod \"openstackclient\" (UID: \"eab12c37-21cd-45ac-9cd9-4d39a91f9192\") " pod="watcher-kuttl-default/openstackclient" Dec 02 14:48:57 crc kubenswrapper[4814]: I1202 14:48:57.476121 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eab12c37-21cd-45ac-9cd9-4d39a91f9192-openstack-config\") pod \"openstackclient\" (UID: \"eab12c37-21cd-45ac-9cd9-4d39a91f9192\") " pod="watcher-kuttl-default/openstackclient" Dec 02 14:48:57 crc kubenswrapper[4814]: I1202 14:48:57.476272 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eab12c37-21cd-45ac-9cd9-4d39a91f9192-openstack-config-secret\") pod \"openstackclient\" (UID: \"eab12c37-21cd-45ac-9cd9-4d39a91f9192\") " pod="watcher-kuttl-default/openstackclient" Dec 02 14:48:57 crc kubenswrapper[4814]: I1202 14:48:57.476374 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab12c37-21cd-45ac-9cd9-4d39a91f9192-combined-ca-bundle\") pod \"openstackclient\" (UID: \"eab12c37-21cd-45ac-9cd9-4d39a91f9192\") " pod="watcher-kuttl-default/openstackclient" Dec 02 14:48:57 crc kubenswrapper[4814]: I1202 14:48:57.578372 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eab12c37-21cd-45ac-9cd9-4d39a91f9192-openstack-config\") pod \"openstackclient\" (UID: \"eab12c37-21cd-45ac-9cd9-4d39a91f9192\") " pod="watcher-kuttl-default/openstackclient" Dec 02 14:48:57 crc kubenswrapper[4814]: I1202 14:48:57.578489 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eab12c37-21cd-45ac-9cd9-4d39a91f9192-openstack-config-secret\") pod \"openstackclient\" (UID: \"eab12c37-21cd-45ac-9cd9-4d39a91f9192\") " pod="watcher-kuttl-default/openstackclient" Dec 02 14:48:57 crc kubenswrapper[4814]: I1202 14:48:57.578531 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab12c37-21cd-45ac-9cd9-4d39a91f9192-combined-ca-bundle\") pod \"openstackclient\" (UID: \"eab12c37-21cd-45ac-9cd9-4d39a91f9192\") " pod="watcher-kuttl-default/openstackclient" Dec 02 14:48:57 crc kubenswrapper[4814]: I1202 14:48:57.578578 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r4nn\" (UniqueName: \"kubernetes.io/projected/eab12c37-21cd-45ac-9cd9-4d39a91f9192-kube-api-access-7r4nn\") pod \"openstackclient\" (UID: \"eab12c37-21cd-45ac-9cd9-4d39a91f9192\") " pod="watcher-kuttl-default/openstackclient" Dec 02 14:48:57 crc kubenswrapper[4814]: I1202 14:48:57.579597 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eab12c37-21cd-45ac-9cd9-4d39a91f9192-openstack-config\") pod \"openstackclient\" (UID: \"eab12c37-21cd-45ac-9cd9-4d39a91f9192\") " pod="watcher-kuttl-default/openstackclient" Dec 02 14:48:57 crc kubenswrapper[4814]: I1202 14:48:57.585678 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eab12c37-21cd-45ac-9cd9-4d39a91f9192-openstack-config-secret\") pod \"openstackclient\" (UID: \"eab12c37-21cd-45ac-9cd9-4d39a91f9192\") " pod="watcher-kuttl-default/openstackclient" Dec 02 14:48:57 crc kubenswrapper[4814]: I1202 14:48:57.586038 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab12c37-21cd-45ac-9cd9-4d39a91f9192-combined-ca-bundle\") pod \"openstackclient\" (UID: \"eab12c37-21cd-45ac-9cd9-4d39a91f9192\") " pod="watcher-kuttl-default/openstackclient" Dec 02 14:48:57 crc kubenswrapper[4814]: I1202 14:48:57.605941 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r4nn\" (UniqueName: 
\"kubernetes.io/projected/eab12c37-21cd-45ac-9cd9-4d39a91f9192-kube-api-access-7r4nn\") pod \"openstackclient\" (UID: \"eab12c37-21cd-45ac-9cd9-4d39a91f9192\") " pod="watcher-kuttl-default/openstackclient" Dec 02 14:48:57 crc kubenswrapper[4814]: I1202 14:48:57.659517 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Dec 02 14:48:58 crc kubenswrapper[4814]: I1202 14:48:58.128793 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Dec 02 14:48:59 crc kubenswrapper[4814]: I1202 14:48:59.004391 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstackclient" event={"ID":"eab12c37-21cd-45ac-9cd9-4d39a91f9192","Type":"ContainerStarted","Data":"4d3e953cdbd25eb83ca0c4228c046e05a7ed04adb1d8eafc831c822aecfb7ffc"} Dec 02 14:49:08 crc kubenswrapper[4814]: I1202 14:49:08.188305 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:10 crc kubenswrapper[4814]: I1202 14:49:10.559887 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Dec 02 14:49:10 crc kubenswrapper[4814]: I1202 14:49:10.560449 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="b0859259-ecec-4400-8770-ed1a52f94dc4" containerName="kube-state-metrics" containerID="cri-o://fe57edb25acfe3cffef930fdfbca2eb7cbe89d239fef03c34db9ed1a3c3c082f" gracePeriod=30 Dec 02 14:49:11 crc kubenswrapper[4814]: I1202 14:49:11.118891 4814 generic.go:334] "Generic (PLEG): container finished" podID="b0859259-ecec-4400-8770-ed1a52f94dc4" containerID="fe57edb25acfe3cffef930fdfbca2eb7cbe89d239fef03c34db9ed1a3c3c082f" exitCode=2 Dec 02 14:49:11 crc kubenswrapper[4814]: I1202 14:49:11.118939 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"b0859259-ecec-4400-8770-ed1a52f94dc4","Type":"ContainerDied","Data":"fe57edb25acfe3cffef930fdfbca2eb7cbe89d239fef03c34db9ed1a3c3c082f"} Dec 02 14:49:11 crc kubenswrapper[4814]: I1202 14:49:11.520990 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:49:11 crc kubenswrapper[4814]: I1202 14:49:11.522050 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" containerName="ceilometer-central-agent" containerID="cri-o://71fff8a58ff5e8c6eb40573aaeeae89a27cf10f938b411346bfed104658ef7c7" gracePeriod=30 Dec 02 14:49:11 crc kubenswrapper[4814]: I1202 14:49:11.522100 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" containerName="proxy-httpd" containerID="cri-o://127125aefb9446b86f529c8c394024f2d59613a04cdf74337f4fd983002f9281" gracePeriod=30 Dec 02 14:49:11 crc kubenswrapper[4814]: I1202 14:49:11.522125 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" containerName="sg-core" containerID="cri-o://a6678a331f60910f2b05a0039534b477cebb418c884673276c4ccec76f4c6205" gracePeriod=30 Dec 02 14:49:11 crc kubenswrapper[4814]: I1202 14:49:11.522122 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" containerName="ceilometer-notification-agent" containerID="cri-o://0db99766bfc562dab4e64a8cbfb54a4e3a0de4ec2bec7ab98ed8e5350fff1b33" gracePeriod=30 Dec 02 14:49:13 crc kubenswrapper[4814]: I1202 14:49:13.138372 4814 generic.go:334] "Generic (PLEG): container finished" podID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" 
containerID="127125aefb9446b86f529c8c394024f2d59613a04cdf74337f4fd983002f9281" exitCode=0 Dec 02 14:49:13 crc kubenswrapper[4814]: I1202 14:49:13.138614 4814 generic.go:334] "Generic (PLEG): container finished" podID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" containerID="a6678a331f60910f2b05a0039534b477cebb418c884673276c4ccec76f4c6205" exitCode=2 Dec 02 14:49:13 crc kubenswrapper[4814]: I1202 14:49:13.138625 4814 generic.go:334] "Generic (PLEG): container finished" podID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" containerID="0db99766bfc562dab4e64a8cbfb54a4e3a0de4ec2bec7ab98ed8e5350fff1b33" exitCode=0 Dec 02 14:49:13 crc kubenswrapper[4814]: I1202 14:49:13.138636 4814 generic.go:334] "Generic (PLEG): container finished" podID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" containerID="71fff8a58ff5e8c6eb40573aaeeae89a27cf10f938b411346bfed104658ef7c7" exitCode=0 Dec 02 14:49:13 crc kubenswrapper[4814]: I1202 14:49:13.138547 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"16ff12c6-e74f-4c04-b3e4-f2cf54533e54","Type":"ContainerDied","Data":"127125aefb9446b86f529c8c394024f2d59613a04cdf74337f4fd983002f9281"} Dec 02 14:49:13 crc kubenswrapper[4814]: I1202 14:49:13.138675 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"16ff12c6-e74f-4c04-b3e4-f2cf54533e54","Type":"ContainerDied","Data":"a6678a331f60910f2b05a0039534b477cebb418c884673276c4ccec76f4c6205"} Dec 02 14:49:13 crc kubenswrapper[4814]: I1202 14:49:13.138692 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"16ff12c6-e74f-4c04-b3e4-f2cf54533e54","Type":"ContainerDied","Data":"0db99766bfc562dab4e64a8cbfb54a4e3a0de4ec2bec7ab98ed8e5350fff1b33"} Dec 02 14:49:13 crc kubenswrapper[4814]: I1202 14:49:13.138704 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"16ff12c6-e74f-4c04-b3e4-f2cf54533e54","Type":"ContainerDied","Data":"71fff8a58ff5e8c6eb40573aaeeae89a27cf10f938b411346bfed104658ef7c7"} Dec 02 14:49:17 crc kubenswrapper[4814]: I1202 14:49:17.994894 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="b0859259-ecec-4400-8770-ed1a52f94dc4" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:49:27 crc kubenswrapper[4814]: I1202 14:49:27.994362 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="b0859259-ecec-4400-8770-ed1a52f94dc4" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.079104 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.221842 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cdc2\" (UniqueName: \"kubernetes.io/projected/b0859259-ecec-4400-8770-ed1a52f94dc4-kube-api-access-2cdc2\") pod \"b0859259-ecec-4400-8770-ed1a52f94dc4\" (UID: \"b0859259-ecec-4400-8770-ed1a52f94dc4\") " Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.236058 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0859259-ecec-4400-8770-ed1a52f94dc4-kube-api-access-2cdc2" (OuterVolumeSpecName: "kube-api-access-2cdc2") pod "b0859259-ecec-4400-8770-ed1a52f94dc4" (UID: "b0859259-ecec-4400-8770-ed1a52f94dc4"). InnerVolumeSpecName "kube-api-access-2cdc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.323703 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cdc2\" (UniqueName: \"kubernetes.io/projected/b0859259-ecec-4400-8770-ed1a52f94dc4-kube-api-access-2cdc2\") on node \"crc\" DevicePath \"\"" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.327097 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"b0859259-ecec-4400-8770-ed1a52f94dc4","Type":"ContainerDied","Data":"bc9e85b55bb6c8f5b0557307ad9cf6d14eebf127fa9528cf3c560e7f8411a02d"} Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.327157 4814 scope.go:117] "RemoveContainer" containerID="fe57edb25acfe3cffef930fdfbca2eb7cbe89d239fef03c34db9ed1a3c3c082f" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.327305 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.372272 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.383788 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.394277 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Dec 02 14:49:29 crc kubenswrapper[4814]: E1202 14:49:29.394632 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0859259-ecec-4400-8770-ed1a52f94dc4" containerName="kube-state-metrics" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.394650 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0859259-ecec-4400-8770-ed1a52f94dc4" containerName="kube-state-metrics" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.394800 4814 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b0859259-ecec-4400-8770-ed1a52f94dc4" containerName="kube-state-metrics" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.395695 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.399487 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"kube-state-metrics-tls-config" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.399635 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-kube-state-metrics-svc" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.406574 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.438916 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:29 crc kubenswrapper[4814]: E1202 14:49:29.479941 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Dec 02 14:49:29 crc kubenswrapper[4814]: E1202 14:49:29.480136 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbch54bh76h657h5bdh57fhb7h8h94h675h69hd8h575h96h5dfh5ch659hd4h575h566h5d4h5dch5f6h578h5bh668h5dch668h59dh6bh7ch84q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.watcher-kuttl-default.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7r4nn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_watcher-kuttl-default(eab12c37-21cd-45ac-9cd9-4d39a91f9192): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:49:29 crc kubenswrapper[4814]: E1202 14:49:29.482398 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/openstackclient" podUID="eab12c37-21cd-45ac-9cd9-4d39a91f9192" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.525882 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-scripts\") pod \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.525989 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-config-data\") pod \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.526020 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-log-httpd\") pod \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\" (UID: 
\"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.526083 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wblff\" (UniqueName: \"kubernetes.io/projected/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-kube-api-access-wblff\") pod \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.526121 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-run-httpd\") pod \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.526554 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "16ff12c6-e74f-4c04-b3e4-f2cf54533e54" (UID: "16ff12c6-e74f-4c04-b3e4-f2cf54533e54"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.526707 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-combined-ca-bundle\") pod \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.526761 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-sg-core-conf-yaml\") pod \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\" (UID: \"16ff12c6-e74f-4c04-b3e4-f2cf54533e54\") " Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.527061 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "16ff12c6-e74f-4c04-b3e4-f2cf54533e54" (UID: "16ff12c6-e74f-4c04-b3e4-f2cf54533e54"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.527325 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.527391 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7drm\" (UniqueName: \"kubernetes.io/projected/4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c-kube-api-access-d7drm\") pod \"kube-state-metrics-0\" (UID: \"4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.527457 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.527482 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.527566 4814 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-log-httpd\") on node \"crc\" DevicePath 
\"\"" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.527579 4814 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.530229 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-kube-api-access-wblff" (OuterVolumeSpecName: "kube-api-access-wblff") pod "16ff12c6-e74f-4c04-b3e4-f2cf54533e54" (UID: "16ff12c6-e74f-4c04-b3e4-f2cf54533e54"). InnerVolumeSpecName "kube-api-access-wblff". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.530511 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-scripts" (OuterVolumeSpecName: "scripts") pod "16ff12c6-e74f-4c04-b3e4-f2cf54533e54" (UID: "16ff12c6-e74f-4c04-b3e4-f2cf54533e54"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.549658 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "16ff12c6-e74f-4c04-b3e4-f2cf54533e54" (UID: "16ff12c6-e74f-4c04-b3e4-f2cf54533e54"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.607923 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16ff12c6-e74f-4c04-b3e4-f2cf54533e54" (UID: "16ff12c6-e74f-4c04-b3e4-f2cf54533e54"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.610471 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-config-data" (OuterVolumeSpecName: "config-data") pod "16ff12c6-e74f-4c04-b3e4-f2cf54533e54" (UID: "16ff12c6-e74f-4c04-b3e4-f2cf54533e54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.629020 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.629065 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.629145 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.629184 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7drm\" (UniqueName: \"kubernetes.io/projected/4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c-kube-api-access-d7drm\") pod \"kube-state-metrics-0\" (UID: 
\"4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.629550 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.629571 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wblff\" (UniqueName: \"kubernetes.io/projected/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-kube-api-access-wblff\") on node \"crc\" DevicePath \"\"" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.629582 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.629590 4814 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.629598 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ff12c6-e74f-4c04-b3e4-f2cf54533e54-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.632023 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.634358 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.634400 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.643731 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7drm\" (UniqueName: \"kubernetes.io/projected/4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c-kube-api-access-d7drm\") pod \"kube-state-metrics-0\" (UID: \"4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:49:29 crc kubenswrapper[4814]: I1202 14:49:29.745356 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.155145 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0859259-ecec-4400-8770-ed1a52f94dc4" path="/var/lib/kubelet/pods/b0859259-ecec-4400-8770-ed1a52f94dc4/volumes" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.196274 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.207647 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.337977 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"16ff12c6-e74f-4c04-b3e4-f2cf54533e54","Type":"ContainerDied","Data":"376abb5e68cb41070f2556ec7ffe49c4d4e0f9d29fd21d286397f20209a09fa2"} Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.338072 4814 scope.go:117] "RemoveContainer" containerID="127125aefb9446b86f529c8c394024f2d59613a04cdf74337f4fd983002f9281" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.337995 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.340478 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c","Type":"ContainerStarted","Data":"81dfaa3f7ffe9c4c3cc20f360ce8bd5233ff301597cfedd2396d2a691d87055b"} Dec 02 14:49:30 crc kubenswrapper[4814]: E1202 14:49:30.352496 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="watcher-kuttl-default/openstackclient" podUID="eab12c37-21cd-45ac-9cd9-4d39a91f9192" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.366839 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.368615 4814 scope.go:117] "RemoveContainer" containerID="a6678a331f60910f2b05a0039534b477cebb418c884673276c4ccec76f4c6205" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.382374 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.408724 4814 scope.go:117] "RemoveContainer" containerID="0db99766bfc562dab4e64a8cbfb54a4e3a0de4ec2bec7ab98ed8e5350fff1b33" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.435290 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:49:30 crc kubenswrapper[4814]: E1202 14:49:30.435670 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" containerName="proxy-httpd" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.435686 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" 
containerName="proxy-httpd" Dec 02 14:49:30 crc kubenswrapper[4814]: E1202 14:49:30.435701 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" containerName="sg-core" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.435709 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" containerName="sg-core" Dec 02 14:49:30 crc kubenswrapper[4814]: E1202 14:49:30.435719 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" containerName="ceilometer-central-agent" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.435725 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" containerName="ceilometer-central-agent" Dec 02 14:49:30 crc kubenswrapper[4814]: E1202 14:49:30.435733 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" containerName="ceilometer-notification-agent" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.435739 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" containerName="ceilometer-notification-agent" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.435877 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" containerName="sg-core" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.435891 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" containerName="ceilometer-notification-agent" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.435907 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" containerName="proxy-httpd" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.435918 4814 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" containerName="ceilometer-central-agent" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.437330 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.444784 4814 scope.go:117] "RemoveContainer" containerID="71fff8a58ff5e8c6eb40573aaeeae89a27cf10f938b411346bfed104658ef7c7" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.450824 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.451048 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.451394 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.484307 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.552019 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.552066 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-config-data\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.552108 4814 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e291138-9070-4e85-8fbd-607f38631a50-run-httpd\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.552142 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.552178 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e291138-9070-4e85-8fbd-607f38631a50-log-httpd\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.552194 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.552210 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-scripts\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.552229 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-vpfj8\" (UniqueName: \"kubernetes.io/projected/2e291138-9070-4e85-8fbd-607f38631a50-kube-api-access-vpfj8\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.653409 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e291138-9070-4e85-8fbd-607f38631a50-log-httpd\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.653483 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.653508 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-scripts\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.653532 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpfj8\" (UniqueName: \"kubernetes.io/projected/2e291138-9070-4e85-8fbd-607f38631a50-kube-api-access-vpfj8\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.653613 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.653633 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-config-data\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.653689 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e291138-9070-4e85-8fbd-607f38631a50-run-httpd\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.653738 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.654462 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e291138-9070-4e85-8fbd-607f38631a50-log-httpd\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.654522 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e291138-9070-4e85-8fbd-607f38631a50-run-httpd\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.659985 4814 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-scripts\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.660705 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.660772 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-config-data\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.661096 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.671090 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.679902 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpfj8\" (UniqueName: \"kubernetes.io/projected/2e291138-9070-4e85-8fbd-607f38631a50-kube-api-access-vpfj8\") pod \"ceilometer-0\" (UID: 
\"2e291138-9070-4e85-8fbd-607f38631a50\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:30 crc kubenswrapper[4814]: I1202 14:49:30.805848 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:31 crc kubenswrapper[4814]: I1202 14:49:31.220929 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:49:31 crc kubenswrapper[4814]: W1202 14:49:31.221867 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e291138_9070_4e85_8fbd_607f38631a50.slice/crio-4eab2a0aaea3f0a5322f8c5b441a662912dd2804c716c9d73f52fc1b4b12a7d7 WatchSource:0}: Error finding container 4eab2a0aaea3f0a5322f8c5b441a662912dd2804c716c9d73f52fc1b4b12a7d7: Status 404 returned error can't find the container with id 4eab2a0aaea3f0a5322f8c5b441a662912dd2804c716c9d73f52fc1b4b12a7d7 Dec 02 14:49:31 crc kubenswrapper[4814]: I1202 14:49:31.358278 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c","Type":"ContainerStarted","Data":"4706b30b1a5b4166add3defb25c661fdc0d7c5ddfb674502900ae14e383b61b1"} Dec 02 14:49:31 crc kubenswrapper[4814]: I1202 14:49:31.359357 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:49:31 crc kubenswrapper[4814]: I1202 14:49:31.359793 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2e291138-9070-4e85-8fbd-607f38631a50","Type":"ContainerStarted","Data":"4eab2a0aaea3f0a5322f8c5b441a662912dd2804c716c9d73f52fc1b4b12a7d7"} Dec 02 14:49:31 crc kubenswrapper[4814]: I1202 14:49:31.378986 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/kube-state-metrics-0" podStartSLOduration=1.917988738 
podStartE2EDuration="2.378959004s" podCreationTimestamp="2025-12-02 14:49:29 +0000 UTC" firstStartedPulling="2025-12-02 14:49:30.207395671 +0000 UTC m=+1232.331757875" lastFinishedPulling="2025-12-02 14:49:30.668365937 +0000 UTC m=+1232.792728141" observedRunningTime="2025-12-02 14:49:31.375636641 +0000 UTC m=+1233.499998855" watchObservedRunningTime="2025-12-02 14:49:31.378959004 +0000 UTC m=+1233.503321208" Dec 02 14:49:32 crc kubenswrapper[4814]: I1202 14:49:32.167257 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16ff12c6-e74f-4c04-b3e4-f2cf54533e54" path="/var/lib/kubelet/pods/16ff12c6-e74f-4c04-b3e4-f2cf54533e54/volumes" Dec 02 14:49:33 crc kubenswrapper[4814]: I1202 14:49:33.383060 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2e291138-9070-4e85-8fbd-607f38631a50","Type":"ContainerStarted","Data":"c2918513604157e41daf2ef051f93de29dc755ca502b6d380bbae5f8abb6670c"} Dec 02 14:49:34 crc kubenswrapper[4814]: I1202 14:49:34.392515 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2e291138-9070-4e85-8fbd-607f38631a50","Type":"ContainerStarted","Data":"5d0caabe50f40f1cc187021f4565cc98919a3db5085fe05ef6894e43e98e885b"} Dec 02 14:49:36 crc kubenswrapper[4814]: I1202 14:49:36.407942 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2e291138-9070-4e85-8fbd-607f38631a50","Type":"ContainerStarted","Data":"1ae5c5d024b7a5223ecb1308c8e85a9ddead903a9d83d8d8a3717678d4b18b2e"} Dec 02 14:49:39 crc kubenswrapper[4814]: I1202 14:49:39.441718 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2e291138-9070-4e85-8fbd-607f38631a50","Type":"ContainerStarted","Data":"e08266dacb23b5fd89a9832c764944b8b116f61c49f1730d47b78c3ec97c54bd"} Dec 02 14:49:39 crc kubenswrapper[4814]: I1202 14:49:39.444309 4814 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:49:39 crc kubenswrapper[4814]: I1202 14:49:39.481356 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.162691137 podStartE2EDuration="9.48132809s" podCreationTimestamp="2025-12-02 14:49:30 +0000 UTC" firstStartedPulling="2025-12-02 14:49:31.223978965 +0000 UTC m=+1233.348341169" lastFinishedPulling="2025-12-02 14:49:38.542615918 +0000 UTC m=+1240.666978122" observedRunningTime="2025-12-02 14:49:39.477640098 +0000 UTC m=+1241.602002352" watchObservedRunningTime="2025-12-02 14:49:39.48132809 +0000 UTC m=+1241.605690324" Dec 02 14:49:39 crc kubenswrapper[4814]: I1202 14:49:39.759766 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 14:49:46 crc kubenswrapper[4814]: I1202 14:49:46.504841 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstackclient" event={"ID":"eab12c37-21cd-45ac-9cd9-4d39a91f9192","Type":"ContainerStarted","Data":"1d75be0f896420f6c4efad4ad36fde98385ac1e4ded3be1b5a11d8cf15412bee"} Dec 02 14:49:46 crc kubenswrapper[4814]: I1202 14:49:46.536108 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/openstackclient" podStartSLOduration=2.008625359 podStartE2EDuration="49.536086666s" podCreationTimestamp="2025-12-02 14:48:57 +0000 UTC" firstStartedPulling="2025-12-02 14:48:58.152055697 +0000 UTC m=+1200.276417921" lastFinishedPulling="2025-12-02 14:49:45.679517024 +0000 UTC m=+1247.803879228" observedRunningTime="2025-12-02 14:49:46.525781448 +0000 UTC m=+1248.650143712" watchObservedRunningTime="2025-12-02 14:49:46.536086666 +0000 UTC m=+1248.660448890" Dec 02 14:50:00 crc kubenswrapper[4814]: I1202 14:50:00.815817 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.129333 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-ftgh5"] Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.131352 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-ftgh5" Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.162484 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-857c-account-create-update-62bvp"] Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.163871 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-857c-account-create-update-62bvp" Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.166525 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.170494 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-ftgh5"] Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.183348 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-857c-account-create-update-62bvp"] Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.287360 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6qz6\" (UniqueName: \"kubernetes.io/projected/45e73995-fe53-4756-81da-3c05cda569f8-kube-api-access-q6qz6\") pod \"watcher-857c-account-create-update-62bvp\" (UID: \"45e73995-fe53-4756-81da-3c05cda569f8\") " pod="watcher-kuttl-default/watcher-857c-account-create-update-62bvp" Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.287447 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/bde7a121-3624-453e-9613-df325f320c51-operator-scripts\") pod \"watcher-db-create-ftgh5\" (UID: \"bde7a121-3624-453e-9613-df325f320c51\") " pod="watcher-kuttl-default/watcher-db-create-ftgh5" Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.287526 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45e73995-fe53-4756-81da-3c05cda569f8-operator-scripts\") pod \"watcher-857c-account-create-update-62bvp\" (UID: \"45e73995-fe53-4756-81da-3c05cda569f8\") " pod="watcher-kuttl-default/watcher-857c-account-create-update-62bvp" Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.287689 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktxqk\" (UniqueName: \"kubernetes.io/projected/bde7a121-3624-453e-9613-df325f320c51-kube-api-access-ktxqk\") pod \"watcher-db-create-ftgh5\" (UID: \"bde7a121-3624-453e-9613-df325f320c51\") " pod="watcher-kuttl-default/watcher-db-create-ftgh5" Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.389288 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktxqk\" (UniqueName: \"kubernetes.io/projected/bde7a121-3624-453e-9613-df325f320c51-kube-api-access-ktxqk\") pod \"watcher-db-create-ftgh5\" (UID: \"bde7a121-3624-453e-9613-df325f320c51\") " pod="watcher-kuttl-default/watcher-db-create-ftgh5" Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.389841 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6qz6\" (UniqueName: \"kubernetes.io/projected/45e73995-fe53-4756-81da-3c05cda569f8-kube-api-access-q6qz6\") pod \"watcher-857c-account-create-update-62bvp\" (UID: \"45e73995-fe53-4756-81da-3c05cda569f8\") " pod="watcher-kuttl-default/watcher-857c-account-create-update-62bvp" Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.389887 
4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bde7a121-3624-453e-9613-df325f320c51-operator-scripts\") pod \"watcher-db-create-ftgh5\" (UID: \"bde7a121-3624-453e-9613-df325f320c51\") " pod="watcher-kuttl-default/watcher-db-create-ftgh5" Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.389933 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45e73995-fe53-4756-81da-3c05cda569f8-operator-scripts\") pod \"watcher-857c-account-create-update-62bvp\" (UID: \"45e73995-fe53-4756-81da-3c05cda569f8\") " pod="watcher-kuttl-default/watcher-857c-account-create-update-62bvp" Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.390758 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bde7a121-3624-453e-9613-df325f320c51-operator-scripts\") pod \"watcher-db-create-ftgh5\" (UID: \"bde7a121-3624-453e-9613-df325f320c51\") " pod="watcher-kuttl-default/watcher-db-create-ftgh5" Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.390943 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45e73995-fe53-4756-81da-3c05cda569f8-operator-scripts\") pod \"watcher-857c-account-create-update-62bvp\" (UID: \"45e73995-fe53-4756-81da-3c05cda569f8\") " pod="watcher-kuttl-default/watcher-857c-account-create-update-62bvp" Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.407499 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6qz6\" (UniqueName: \"kubernetes.io/projected/45e73995-fe53-4756-81da-3c05cda569f8-kube-api-access-q6qz6\") pod \"watcher-857c-account-create-update-62bvp\" (UID: \"45e73995-fe53-4756-81da-3c05cda569f8\") " pod="watcher-kuttl-default/watcher-857c-account-create-update-62bvp" 
Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.409979 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktxqk\" (UniqueName: \"kubernetes.io/projected/bde7a121-3624-453e-9613-df325f320c51-kube-api-access-ktxqk\") pod \"watcher-db-create-ftgh5\" (UID: \"bde7a121-3624-453e-9613-df325f320c51\") " pod="watcher-kuttl-default/watcher-db-create-ftgh5" Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.461433 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-ftgh5" Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.485447 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-857c-account-create-update-62bvp" Dec 02 14:50:06 crc kubenswrapper[4814]: I1202 14:50:06.958363 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-ftgh5"] Dec 02 14:50:07 crc kubenswrapper[4814]: I1202 14:50:07.015543 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-857c-account-create-update-62bvp"] Dec 02 14:50:07 crc kubenswrapper[4814]: I1202 14:50:07.722944 4814 generic.go:334] "Generic (PLEG): container finished" podID="45e73995-fe53-4756-81da-3c05cda569f8" containerID="ebc070539c5c7c5efbb726b265158123e733f6c5e0dcbd5fc92a6affdf127e84" exitCode=0 Dec 02 14:50:07 crc kubenswrapper[4814]: I1202 14:50:07.722995 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-857c-account-create-update-62bvp" event={"ID":"45e73995-fe53-4756-81da-3c05cda569f8","Type":"ContainerDied","Data":"ebc070539c5c7c5efbb726b265158123e733f6c5e0dcbd5fc92a6affdf127e84"} Dec 02 14:50:07 crc kubenswrapper[4814]: I1202 14:50:07.723420 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-857c-account-create-update-62bvp" 
event={"ID":"45e73995-fe53-4756-81da-3c05cda569f8","Type":"ContainerStarted","Data":"56f61f9f0d4ee0cfac78b99b67e51a679fb4336fdb0ec22d257d960198245a1b"} Dec 02 14:50:07 crc kubenswrapper[4814]: I1202 14:50:07.726448 4814 generic.go:334] "Generic (PLEG): container finished" podID="bde7a121-3624-453e-9613-df325f320c51" containerID="b378beec2d4b2e1b10b2ab9decd4e32d0356f651fe2fcd57b44500b76a47a033" exitCode=0 Dec 02 14:50:07 crc kubenswrapper[4814]: I1202 14:50:07.726511 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-ftgh5" event={"ID":"bde7a121-3624-453e-9613-df325f320c51","Type":"ContainerDied","Data":"b378beec2d4b2e1b10b2ab9decd4e32d0356f651fe2fcd57b44500b76a47a033"} Dec 02 14:50:07 crc kubenswrapper[4814]: I1202 14:50:07.726554 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-ftgh5" event={"ID":"bde7a121-3624-453e-9613-df325f320c51","Type":"ContainerStarted","Data":"20c459753d19d679d67145c14d1c9ad86165ed5dc74da4b076d3450912d3d5d0"} Dec 02 14:50:09 crc kubenswrapper[4814]: I1202 14:50:09.126438 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-857c-account-create-update-62bvp" Dec 02 14:50:09 crc kubenswrapper[4814]: I1202 14:50:09.154858 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-ftgh5" Dec 02 14:50:09 crc kubenswrapper[4814]: I1202 14:50:09.244056 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6qz6\" (UniqueName: \"kubernetes.io/projected/45e73995-fe53-4756-81da-3c05cda569f8-kube-api-access-q6qz6\") pod \"45e73995-fe53-4756-81da-3c05cda569f8\" (UID: \"45e73995-fe53-4756-81da-3c05cda569f8\") " Dec 02 14:50:09 crc kubenswrapper[4814]: I1202 14:50:09.244138 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45e73995-fe53-4756-81da-3c05cda569f8-operator-scripts\") pod \"45e73995-fe53-4756-81da-3c05cda569f8\" (UID: \"45e73995-fe53-4756-81da-3c05cda569f8\") " Dec 02 14:50:09 crc kubenswrapper[4814]: I1202 14:50:09.245243 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45e73995-fe53-4756-81da-3c05cda569f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45e73995-fe53-4756-81da-3c05cda569f8" (UID: "45e73995-fe53-4756-81da-3c05cda569f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:50:09 crc kubenswrapper[4814]: I1202 14:50:09.251901 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45e73995-fe53-4756-81da-3c05cda569f8-kube-api-access-q6qz6" (OuterVolumeSpecName: "kube-api-access-q6qz6") pod "45e73995-fe53-4756-81da-3c05cda569f8" (UID: "45e73995-fe53-4756-81da-3c05cda569f8"). InnerVolumeSpecName "kube-api-access-q6qz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:50:09 crc kubenswrapper[4814]: I1202 14:50:09.345700 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktxqk\" (UniqueName: \"kubernetes.io/projected/bde7a121-3624-453e-9613-df325f320c51-kube-api-access-ktxqk\") pod \"bde7a121-3624-453e-9613-df325f320c51\" (UID: \"bde7a121-3624-453e-9613-df325f320c51\") " Dec 02 14:50:09 crc kubenswrapper[4814]: I1202 14:50:09.345880 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bde7a121-3624-453e-9613-df325f320c51-operator-scripts\") pod \"bde7a121-3624-453e-9613-df325f320c51\" (UID: \"bde7a121-3624-453e-9613-df325f320c51\") " Dec 02 14:50:09 crc kubenswrapper[4814]: I1202 14:50:09.346565 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bde7a121-3624-453e-9613-df325f320c51-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bde7a121-3624-453e-9613-df325f320c51" (UID: "bde7a121-3624-453e-9613-df325f320c51"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:50:09 crc kubenswrapper[4814]: I1202 14:50:09.347410 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6qz6\" (UniqueName: \"kubernetes.io/projected/45e73995-fe53-4756-81da-3c05cda569f8-kube-api-access-q6qz6\") on node \"crc\" DevicePath \"\"" Dec 02 14:50:09 crc kubenswrapper[4814]: I1202 14:50:09.347460 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45e73995-fe53-4756-81da-3c05cda569f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:50:09 crc kubenswrapper[4814]: I1202 14:50:09.347475 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bde7a121-3624-453e-9613-df325f320c51-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:50:09 crc kubenswrapper[4814]: I1202 14:50:09.350913 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bde7a121-3624-453e-9613-df325f320c51-kube-api-access-ktxqk" (OuterVolumeSpecName: "kube-api-access-ktxqk") pod "bde7a121-3624-453e-9613-df325f320c51" (UID: "bde7a121-3624-453e-9613-df325f320c51"). InnerVolumeSpecName "kube-api-access-ktxqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:50:09 crc kubenswrapper[4814]: I1202 14:50:09.448976 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktxqk\" (UniqueName: \"kubernetes.io/projected/bde7a121-3624-453e-9613-df325f320c51-kube-api-access-ktxqk\") on node \"crc\" DevicePath \"\"" Dec 02 14:50:09 crc kubenswrapper[4814]: I1202 14:50:09.753386 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-857c-account-create-update-62bvp" Dec 02 14:50:09 crc kubenswrapper[4814]: I1202 14:50:09.755632 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-857c-account-create-update-62bvp" event={"ID":"45e73995-fe53-4756-81da-3c05cda569f8","Type":"ContainerDied","Data":"56f61f9f0d4ee0cfac78b99b67e51a679fb4336fdb0ec22d257d960198245a1b"} Dec 02 14:50:09 crc kubenswrapper[4814]: I1202 14:50:09.755791 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56f61f9f0d4ee0cfac78b99b67e51a679fb4336fdb0ec22d257d960198245a1b" Dec 02 14:50:09 crc kubenswrapper[4814]: I1202 14:50:09.757442 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-ftgh5" event={"ID":"bde7a121-3624-453e-9613-df325f320c51","Type":"ContainerDied","Data":"20c459753d19d679d67145c14d1c9ad86165ed5dc74da4b076d3450912d3d5d0"} Dec 02 14:50:09 crc kubenswrapper[4814]: I1202 14:50:09.757697 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20c459753d19d679d67145c14d1c9ad86165ed5dc74da4b076d3450912d3d5d0" Dec 02 14:50:09 crc kubenswrapper[4814]: I1202 14:50:09.757826 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-ftgh5" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.662859 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8"] Dec 02 14:50:11 crc kubenswrapper[4814]: E1202 14:50:11.663961 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde7a121-3624-453e-9613-df325f320c51" containerName="mariadb-database-create" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.663990 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde7a121-3624-453e-9613-df325f320c51" containerName="mariadb-database-create" Dec 02 14:50:11 crc kubenswrapper[4814]: E1202 14:50:11.664052 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e73995-fe53-4756-81da-3c05cda569f8" containerName="mariadb-account-create-update" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.664068 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e73995-fe53-4756-81da-3c05cda569f8" containerName="mariadb-account-create-update" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.664490 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="bde7a121-3624-453e-9613-df325f320c51" containerName="mariadb-database-create" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.664534 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e73995-fe53-4756-81da-3c05cda569f8" containerName="mariadb-account-create-update" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.665781 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.668937 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-xm2dn" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.669962 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.673109 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8"] Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.787149 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-fl8m8\" (UID: \"593ae4fb-ae06-4616-b78d-ba6914c1b1a5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.787194 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-db-sync-config-data\") pod \"watcher-kuttl-db-sync-fl8m8\" (UID: \"593ae4fb-ae06-4616-b78d-ba6914c1b1a5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.787296 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-config-data\") pod \"watcher-kuttl-db-sync-fl8m8\" (UID: \"593ae4fb-ae06-4616-b78d-ba6914c1b1a5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.787326 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7qw6\" (UniqueName: \"kubernetes.io/projected/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-kube-api-access-n7qw6\") pod \"watcher-kuttl-db-sync-fl8m8\" (UID: \"593ae4fb-ae06-4616-b78d-ba6914c1b1a5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.889218 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-config-data\") pod \"watcher-kuttl-db-sync-fl8m8\" (UID: \"593ae4fb-ae06-4616-b78d-ba6914c1b1a5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.890370 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7qw6\" (UniqueName: \"kubernetes.io/projected/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-kube-api-access-n7qw6\") pod \"watcher-kuttl-db-sync-fl8m8\" (UID: \"593ae4fb-ae06-4616-b78d-ba6914c1b1a5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.890494 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-fl8m8\" (UID: \"593ae4fb-ae06-4616-b78d-ba6914c1b1a5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.890519 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-db-sync-config-data\") pod \"watcher-kuttl-db-sync-fl8m8\" (UID: \"593ae4fb-ae06-4616-b78d-ba6914c1b1a5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 
14:50:11.895934 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-db-sync-config-data\") pod \"watcher-kuttl-db-sync-fl8m8\" (UID: \"593ae4fb-ae06-4616-b78d-ba6914c1b1a5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.896379 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-config-data\") pod \"watcher-kuttl-db-sync-fl8m8\" (UID: \"593ae4fb-ae06-4616-b78d-ba6914c1b1a5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.900740 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-fl8m8\" (UID: \"593ae4fb-ae06-4616-b78d-ba6914c1b1a5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.926934 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7qw6\" (UniqueName: \"kubernetes.io/projected/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-kube-api-access-n7qw6\") pod \"watcher-kuttl-db-sync-fl8m8\" (UID: \"593ae4fb-ae06-4616-b78d-ba6914c1b1a5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" Dec 02 14:50:11 crc kubenswrapper[4814]: I1202 14:50:11.998851 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" Dec 02 14:50:12 crc kubenswrapper[4814]: I1202 14:50:12.330131 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8"] Dec 02 14:50:12 crc kubenswrapper[4814]: I1202 14:50:12.787164 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" event={"ID":"593ae4fb-ae06-4616-b78d-ba6914c1b1a5","Type":"ContainerStarted","Data":"10f7bd11fa4c670603d4db2826f40c854e608dc331f9d80117318eef79c22264"} Dec 02 14:50:29 crc kubenswrapper[4814]: E1202 14:50:29.652734 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.158:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Dec 02 14:50:29 crc kubenswrapper[4814]: E1202 14:50:29.653290 4814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.158:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Dec 02 14:50:29 crc kubenswrapper[4814]: E1202 14:50:29.653415 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-kuttl-db-sync,Image:38.102.83.158:5001/podified-master-centos10/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7qw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
watcher-kuttl-db-sync-fl8m8_watcher-kuttl-default(593ae4fb-ae06-4616-b78d-ba6914c1b1a5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:50:29 crc kubenswrapper[4814]: E1202 14:50:29.654583 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-kuttl-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" podUID="593ae4fb-ae06-4616-b78d-ba6914c1b1a5" Dec 02 14:50:29 crc kubenswrapper[4814]: E1202 14:50:29.930602 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-kuttl-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.158:5001/podified-master-centos10/openstack-watcher-api:watcher_latest\\\"\"" pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" podUID="593ae4fb-ae06-4616-b78d-ba6914c1b1a5" Dec 02 14:50:45 crc kubenswrapper[4814]: I1202 14:50:45.064145 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" event={"ID":"593ae4fb-ae06-4616-b78d-ba6914c1b1a5","Type":"ContainerStarted","Data":"4fa84db1dbe6c47a044d4c1e1650949e09e074409f5b331888f1743687d5ef4a"} Dec 02 14:50:45 crc kubenswrapper[4814]: I1202 14:50:45.097746 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" podStartSLOduration=1.951867979 podStartE2EDuration="34.097715056s" podCreationTimestamp="2025-12-02 14:50:11 +0000 UTC" firstStartedPulling="2025-12-02 14:50:12.340421165 +0000 UTC m=+1274.464783369" lastFinishedPulling="2025-12-02 14:50:44.486268202 +0000 UTC m=+1306.610630446" observedRunningTime="2025-12-02 14:50:45.087466439 +0000 UTC m=+1307.211828713" watchObservedRunningTime="2025-12-02 14:50:45.097715056 +0000 UTC m=+1307.222077290" Dec 02 14:50:49 crc 
kubenswrapper[4814]: I1202 14:50:49.100253 4814 generic.go:334] "Generic (PLEG): container finished" podID="593ae4fb-ae06-4616-b78d-ba6914c1b1a5" containerID="4fa84db1dbe6c47a044d4c1e1650949e09e074409f5b331888f1743687d5ef4a" exitCode=0 Dec 02 14:50:49 crc kubenswrapper[4814]: I1202 14:50:49.100337 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" event={"ID":"593ae4fb-ae06-4616-b78d-ba6914c1b1a5","Type":"ContainerDied","Data":"4fa84db1dbe6c47a044d4c1e1650949e09e074409f5b331888f1743687d5ef4a"} Dec 02 14:50:50 crc kubenswrapper[4814]: I1202 14:50:50.390468 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" Dec 02 14:50:50 crc kubenswrapper[4814]: I1202 14:50:50.439334 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-combined-ca-bundle\") pod \"593ae4fb-ae06-4616-b78d-ba6914c1b1a5\" (UID: \"593ae4fb-ae06-4616-b78d-ba6914c1b1a5\") " Dec 02 14:50:50 crc kubenswrapper[4814]: I1202 14:50:50.439399 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7qw6\" (UniqueName: \"kubernetes.io/projected/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-kube-api-access-n7qw6\") pod \"593ae4fb-ae06-4616-b78d-ba6914c1b1a5\" (UID: \"593ae4fb-ae06-4616-b78d-ba6914c1b1a5\") " Dec 02 14:50:50 crc kubenswrapper[4814]: I1202 14:50:50.439505 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-config-data\") pod \"593ae4fb-ae06-4616-b78d-ba6914c1b1a5\" (UID: \"593ae4fb-ae06-4616-b78d-ba6914c1b1a5\") " Dec 02 14:50:50 crc kubenswrapper[4814]: I1202 14:50:50.439568 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-db-sync-config-data\") pod \"593ae4fb-ae06-4616-b78d-ba6914c1b1a5\" (UID: \"593ae4fb-ae06-4616-b78d-ba6914c1b1a5\") " Dec 02 14:50:50 crc kubenswrapper[4814]: I1202 14:50:50.445438 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "593ae4fb-ae06-4616-b78d-ba6914c1b1a5" (UID: "593ae4fb-ae06-4616-b78d-ba6914c1b1a5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:50:50 crc kubenswrapper[4814]: I1202 14:50:50.446803 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-kube-api-access-n7qw6" (OuterVolumeSpecName: "kube-api-access-n7qw6") pod "593ae4fb-ae06-4616-b78d-ba6914c1b1a5" (UID: "593ae4fb-ae06-4616-b78d-ba6914c1b1a5"). InnerVolumeSpecName "kube-api-access-n7qw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:50:50 crc kubenswrapper[4814]: I1202 14:50:50.481526 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "593ae4fb-ae06-4616-b78d-ba6914c1b1a5" (UID: "593ae4fb-ae06-4616-b78d-ba6914c1b1a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:50:50 crc kubenswrapper[4814]: I1202 14:50:50.501542 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-config-data" (OuterVolumeSpecName: "config-data") pod "593ae4fb-ae06-4616-b78d-ba6914c1b1a5" (UID: "593ae4fb-ae06-4616-b78d-ba6914c1b1a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:50:50 crc kubenswrapper[4814]: I1202 14:50:50.541911 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:50:50 crc kubenswrapper[4814]: I1202 14:50:50.541951 4814 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:50:50 crc kubenswrapper[4814]: I1202 14:50:50.541966 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:50:50 crc kubenswrapper[4814]: I1202 14:50:50.541983 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7qw6\" (UniqueName: \"kubernetes.io/projected/593ae4fb-ae06-4616-b78d-ba6914c1b1a5-kube-api-access-n7qw6\") on node \"crc\" DevicePath \"\"" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.116974 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" event={"ID":"593ae4fb-ae06-4616-b78d-ba6914c1b1a5","Type":"ContainerDied","Data":"10f7bd11fa4c670603d4db2826f40c854e608dc331f9d80117318eef79c22264"} Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.117012 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10f7bd11fa4c670603d4db2826f40c854e608dc331f9d80117318eef79c22264" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.117085 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.490939 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 14:50:51 crc kubenswrapper[4814]: E1202 14:50:51.491374 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="593ae4fb-ae06-4616-b78d-ba6914c1b1a5" containerName="watcher-kuttl-db-sync" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.491388 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="593ae4fb-ae06-4616-b78d-ba6914c1b1a5" containerName="watcher-kuttl-db-sync" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.491560 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="593ae4fb-ae06-4616-b78d-ba6914c1b1a5" containerName="watcher-kuttl-db-sync" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.492599 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.496285 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-xm2dn" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.496477 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.517396 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.518993 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.521574 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.531375 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.562167 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhrlw\" (UniqueName: \"kubernetes.io/projected/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-kube-api-access-hhrlw\") pod \"watcher-kuttl-api-0\" (UID: \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.562490 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-logs\") pod \"watcher-kuttl-api-0\" (UID: \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.562855 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.562898 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.562919 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.584113 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.613298 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.614683 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.623864 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.626598 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.665213 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.665490 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.665575 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.665690 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"df93acdc-d9c1-4ba8-8628-8ec9e51d5023\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.665831 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"df93acdc-d9c1-4ba8-8628-8ec9e51d5023\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.665945 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhrlw\" (UniqueName: \"kubernetes.io/projected/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-kube-api-access-hhrlw\") pod \"watcher-kuttl-api-0\" (UID: \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.666031 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-logs\") pod \"watcher-kuttl-api-0\" (UID: \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.666142 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k7bx\" (UniqueName: \"kubernetes.io/projected/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-kube-api-access-2k7bx\") pod \"watcher-kuttl-applier-0\" (UID: \"df93acdc-d9c1-4ba8-8628-8ec9e51d5023\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.666258 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"df93acdc-d9c1-4ba8-8628-8ec9e51d5023\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.667814 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-logs\") pod \"watcher-kuttl-api-0\" (UID: \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.671320 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.679449 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-combined-ca-bundle\") pod 
\"watcher-kuttl-api-0\" (UID: \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.686163 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.691966 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhrlw\" (UniqueName: \"kubernetes.io/projected/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-kube-api-access-hhrlw\") pod \"watcher-kuttl-api-0\" (UID: \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.767243 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"df93acdc-d9c1-4ba8-8628-8ec9e51d5023\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.767302 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"df93acdc-d9c1-4ba8-8628-8ec9e51d5023\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.767328 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8x45\" (UniqueName: \"kubernetes.io/projected/58c4482d-bd8b-480c-b72c-a117f0c59288-kube-api-access-q8x45\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"58c4482d-bd8b-480c-b72c-a117f0c59288\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.767364 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58c4482d-bd8b-480c-b72c-a117f0c59288-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"58c4482d-bd8b-480c-b72c-a117f0c59288\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.767390 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k7bx\" (UniqueName: \"kubernetes.io/projected/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-kube-api-access-2k7bx\") pod \"watcher-kuttl-applier-0\" (UID: \"df93acdc-d9c1-4ba8-8628-8ec9e51d5023\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.767411 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"df93acdc-d9c1-4ba8-8628-8ec9e51d5023\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.767428 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/58c4482d-bd8b-480c-b72c-a117f0c59288-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"58c4482d-bd8b-480c-b72c-a117f0c59288\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.767467 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/58c4482d-bd8b-480c-b72c-a117f0c59288-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"58c4482d-bd8b-480c-b72c-a117f0c59288\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.767499 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c4482d-bd8b-480c-b72c-a117f0c59288-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"58c4482d-bd8b-480c-b72c-a117f0c59288\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.768151 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"df93acdc-d9c1-4ba8-8628-8ec9e51d5023\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.771797 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"df93acdc-d9c1-4ba8-8628-8ec9e51d5023\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.774923 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"df93acdc-d9c1-4ba8-8628-8ec9e51d5023\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.784111 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k7bx\" (UniqueName: 
\"kubernetes.io/projected/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-kube-api-access-2k7bx\") pod \"watcher-kuttl-applier-0\" (UID: \"df93acdc-d9c1-4ba8-8628-8ec9e51d5023\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.813085 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.836429 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.869130 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58c4482d-bd8b-480c-b72c-a117f0c59288-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"58c4482d-bd8b-480c-b72c-a117f0c59288\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.869185 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/58c4482d-bd8b-480c-b72c-a117f0c59288-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"58c4482d-bd8b-480c-b72c-a117f0c59288\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.869249 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c4482d-bd8b-480c-b72c-a117f0c59288-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"58c4482d-bd8b-480c-b72c-a117f0c59288\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.869286 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/58c4482d-bd8b-480c-b72c-a117f0c59288-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"58c4482d-bd8b-480c-b72c-a117f0c59288\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.869349 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8x45\" (UniqueName: \"kubernetes.io/projected/58c4482d-bd8b-480c-b72c-a117f0c59288-kube-api-access-q8x45\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"58c4482d-bd8b-480c-b72c-a117f0c59288\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.869656 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58c4482d-bd8b-480c-b72c-a117f0c59288-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"58c4482d-bd8b-480c-b72c-a117f0c59288\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.874754 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c4482d-bd8b-480c-b72c-a117f0c59288-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"58c4482d-bd8b-480c-b72c-a117f0c59288\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.875016 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/58c4482d-bd8b-480c-b72c-a117f0c59288-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"58c4482d-bd8b-480c-b72c-a117f0c59288\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.876933 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/58c4482d-bd8b-480c-b72c-a117f0c59288-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"58c4482d-bd8b-480c-b72c-a117f0c59288\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.886897 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8x45\" (UniqueName: \"kubernetes.io/projected/58c4482d-bd8b-480c-b72c-a117f0c59288-kube-api-access-q8x45\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"58c4482d-bd8b-480c-b72c-a117f0c59288\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:50:51 crc kubenswrapper[4814]: I1202 14:50:51.953036 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:50:52 crc kubenswrapper[4814]: I1202 14:50:52.237474 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 14:50:52 crc kubenswrapper[4814]: W1202 14:50:52.248005 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58c4482d_bd8b_480c_b72c_a117f0c59288.slice/crio-d006757e3bed6d7d02d50d8b0ccc9bdf88ed59dd6653e0e38a7099468ccba18e WatchSource:0}: Error finding container d006757e3bed6d7d02d50d8b0ccc9bdf88ed59dd6653e0e38a7099468ccba18e: Status 404 returned error can't find the container with id d006757e3bed6d7d02d50d8b0ccc9bdf88ed59dd6653e0e38a7099468ccba18e Dec 02 14:50:52 crc kubenswrapper[4814]: I1202 14:50:52.277222 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 14:50:52 crc kubenswrapper[4814]: W1202 14:50:52.285456 4814 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ccf0f7e_6682_4f67_b164_a3b6ee53133e.slice/crio-efc38813fd760e105c2a86a5a56006d332d5ed25d31a71e5801435d129df8d43 WatchSource:0}: Error finding container efc38813fd760e105c2a86a5a56006d332d5ed25d31a71e5801435d129df8d43: Status 404 returned error can't find the container with id efc38813fd760e105c2a86a5a56006d332d5ed25d31a71e5801435d129df8d43 Dec 02 14:50:52 crc kubenswrapper[4814]: I1202 14:50:52.360300 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 14:50:52 crc kubenswrapper[4814]: W1202 14:50:52.366346 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf93acdc_d9c1_4ba8_8628_8ec9e51d5023.slice/crio-41bebffbb65944eb210dff7da2d6670b1b631edfe767988967278b5a8177b790 WatchSource:0}: Error finding container 41bebffbb65944eb210dff7da2d6670b1b631edfe767988967278b5a8177b790: Status 404 returned error can't find the container with id 41bebffbb65944eb210dff7da2d6670b1b631edfe767988967278b5a8177b790 Dec 02 14:50:53 crc kubenswrapper[4814]: I1202 14:50:53.133956 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5ccf0f7e-6682-4f67-b164-a3b6ee53133e","Type":"ContainerStarted","Data":"b2084e641484dbf83348f421a2ab06e85aa32eb58b71e32a614290a82c7e2a6b"} Dec 02 14:50:53 crc kubenswrapper[4814]: I1202 14:50:53.134274 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5ccf0f7e-6682-4f67-b164-a3b6ee53133e","Type":"ContainerStarted","Data":"bd0e23330a85d7d8b47fefac89db45de4ca3d83f17f2caa7956780ae089a2289"} Dec 02 14:50:53 crc kubenswrapper[4814]: I1202 14:50:53.134289 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"5ccf0f7e-6682-4f67-b164-a3b6ee53133e","Type":"ContainerStarted","Data":"efc38813fd760e105c2a86a5a56006d332d5ed25d31a71e5801435d129df8d43"} Dec 02 14:50:53 crc kubenswrapper[4814]: I1202 14:50:53.135245 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:50:53 crc kubenswrapper[4814]: I1202 14:50:53.137089 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"58c4482d-bd8b-480c-b72c-a117f0c59288","Type":"ContainerStarted","Data":"d006757e3bed6d7d02d50d8b0ccc9bdf88ed59dd6653e0e38a7099468ccba18e"} Dec 02 14:50:53 crc kubenswrapper[4814]: I1202 14:50:53.138956 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"df93acdc-d9c1-4ba8-8628-8ec9e51d5023","Type":"ContainerStarted","Data":"41bebffbb65944eb210dff7da2d6670b1b631edfe767988967278b5a8177b790"} Dec 02 14:50:53 crc kubenswrapper[4814]: I1202 14:50:53.160899 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.160875195 podStartE2EDuration="2.160875195s" podCreationTimestamp="2025-12-02 14:50:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:50:53.154001913 +0000 UTC m=+1315.278364117" watchObservedRunningTime="2025-12-02 14:50:53.160875195 +0000 UTC m=+1315.285237399" Dec 02 14:50:54 crc kubenswrapper[4814]: I1202 14:50:54.157496 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"58c4482d-bd8b-480c-b72c-a117f0c59288","Type":"ContainerStarted","Data":"c3df678d115623af392b3911e1cc6af49d5662dd9c585ba89eed8aa728a0715c"} Dec 02 14:50:54 crc kubenswrapper[4814]: I1202 14:50:54.160666 4814 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"df93acdc-d9c1-4ba8-8628-8ec9e51d5023","Type":"ContainerStarted","Data":"3439b46ab2bfc5454e11e928fec16ff750fb628f505a4198743940a9a5fec09e"} Dec 02 14:50:54 crc kubenswrapper[4814]: I1202 14:50:54.209109 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.2577738480000002 podStartE2EDuration="3.209083113s" podCreationTimestamp="2025-12-02 14:50:51 +0000 UTC" firstStartedPulling="2025-12-02 14:50:52.251656723 +0000 UTC m=+1314.376018927" lastFinishedPulling="2025-12-02 14:50:53.202965988 +0000 UTC m=+1315.327328192" observedRunningTime="2025-12-02 14:50:54.199664447 +0000 UTC m=+1316.324026661" watchObservedRunningTime="2025-12-02 14:50:54.209083113 +0000 UTC m=+1316.333445317" Dec 02 14:50:54 crc kubenswrapper[4814]: I1202 14:50:54.230471 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.39177439 podStartE2EDuration="3.230441837s" podCreationTimestamp="2025-12-02 14:50:51 +0000 UTC" firstStartedPulling="2025-12-02 14:50:52.368859385 +0000 UTC m=+1314.493221609" lastFinishedPulling="2025-12-02 14:50:53.207526852 +0000 UTC m=+1315.331889056" observedRunningTime="2025-12-02 14:50:54.213908113 +0000 UTC m=+1316.338270327" watchObservedRunningTime="2025-12-02 14:50:54.230441837 +0000 UTC m=+1316.354804041" Dec 02 14:50:55 crc kubenswrapper[4814]: I1202 14:50:55.173348 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 14:50:55 crc kubenswrapper[4814]: I1202 14:50:55.263882 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:50:56 crc kubenswrapper[4814]: I1202 14:50:56.814374 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 
02 14:50:56 crc kubenswrapper[4814]: I1202 14:50:56.838141 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:01 crc kubenswrapper[4814]: I1202 14:51:01.814743 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:01 crc kubenswrapper[4814]: I1202 14:51:01.821933 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:01 crc kubenswrapper[4814]: I1202 14:51:01.838197 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:01 crc kubenswrapper[4814]: I1202 14:51:01.873031 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:01 crc kubenswrapper[4814]: I1202 14:51:01.953842 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:01 crc kubenswrapper[4814]: I1202 14:51:01.997590 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:02 crc kubenswrapper[4814]: I1202 14:51:02.228421 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:02 crc kubenswrapper[4814]: I1202 14:51:02.233334 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:02 crc kubenswrapper[4814]: I1202 14:51:02.263559 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:02 crc kubenswrapper[4814]: I1202 14:51:02.271385 4814 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.524717 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.525336 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2e291138-9070-4e85-8fbd-607f38631a50" containerName="ceilometer-central-agent" containerID="cri-o://c2918513604157e41daf2ef051f93de29dc755ca502b6d380bbae5f8abb6670c" gracePeriod=30 Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.525378 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2e291138-9070-4e85-8fbd-607f38631a50" containerName="proxy-httpd" containerID="cri-o://e08266dacb23b5fd89a9832c764944b8b116f61c49f1730d47b78c3ec97c54bd" gracePeriod=30 Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.525425 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2e291138-9070-4e85-8fbd-607f38631a50" containerName="ceilometer-notification-agent" containerID="cri-o://5d0caabe50f40f1cc187021f4565cc98919a3db5085fe05ef6894e43e98e885b" gracePeriod=30 Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.525452 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2e291138-9070-4e85-8fbd-607f38631a50" containerName="sg-core" containerID="cri-o://1ae5c5d024b7a5223ecb1308c8e85a9ddead903a9d83d8d8a3717678d4b18b2e" gracePeriod=30 Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.601878 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8"] Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.609487 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-db-sync-fl8m8"] Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.625697 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher857c-account-delete-fg648"] Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.626819 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher857c-account-delete-fg648" Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.650786 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher857c-account-delete-fg648"] Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.685221 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/179ebedc-6394-4aa1-a50e-5f3636af0579-operator-scripts\") pod \"watcher857c-account-delete-fg648\" (UID: \"179ebedc-6394-4aa1-a50e-5f3636af0579\") " pod="watcher-kuttl-default/watcher857c-account-delete-fg648" Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.685298 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qknbh\" (UniqueName: \"kubernetes.io/projected/179ebedc-6394-4aa1-a50e-5f3636af0579-kube-api-access-qknbh\") pod \"watcher857c-account-delete-fg648\" (UID: \"179ebedc-6394-4aa1-a50e-5f3636af0579\") " pod="watcher-kuttl-default/watcher857c-account-delete-fg648" Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.685479 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.685662 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="df93acdc-d9c1-4ba8-8628-8ec9e51d5023" containerName="watcher-applier" 
containerID="cri-o://3439b46ab2bfc5454e11e928fec16ff750fb628f505a4198743940a9a5fec09e" gracePeriod=30 Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.739035 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.770052 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.770286 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="5ccf0f7e-6682-4f67-b164-a3b6ee53133e" containerName="watcher-kuttl-api-log" containerID="cri-o://bd0e23330a85d7d8b47fefac89db45de4ca3d83f17f2caa7956780ae089a2289" gracePeriod=30 Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.770413 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="5ccf0f7e-6682-4f67-b164-a3b6ee53133e" containerName="watcher-api" containerID="cri-o://b2084e641484dbf83348f421a2ab06e85aa32eb58b71e32a614290a82c7e2a6b" gracePeriod=30 Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.787126 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/179ebedc-6394-4aa1-a50e-5f3636af0579-operator-scripts\") pod \"watcher857c-account-delete-fg648\" (UID: \"179ebedc-6394-4aa1-a50e-5f3636af0579\") " pod="watcher-kuttl-default/watcher857c-account-delete-fg648" Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.787179 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qknbh\" (UniqueName: \"kubernetes.io/projected/179ebedc-6394-4aa1-a50e-5f3636af0579-kube-api-access-qknbh\") pod \"watcher857c-account-delete-fg648\" (UID: \"179ebedc-6394-4aa1-a50e-5f3636af0579\") " 
pod="watcher-kuttl-default/watcher857c-account-delete-fg648" Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.787900 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/179ebedc-6394-4aa1-a50e-5f3636af0579-operator-scripts\") pod \"watcher857c-account-delete-fg648\" (UID: \"179ebedc-6394-4aa1-a50e-5f3636af0579\") " pod="watcher-kuttl-default/watcher857c-account-delete-fg648" Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.829305 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qknbh\" (UniqueName: \"kubernetes.io/projected/179ebedc-6394-4aa1-a50e-5f3636af0579-kube-api-access-qknbh\") pod \"watcher857c-account-delete-fg648\" (UID: \"179ebedc-6394-4aa1-a50e-5f3636af0579\") " pod="watcher-kuttl-default/watcher857c-account-delete-fg648" Dec 02 14:51:04 crc kubenswrapper[4814]: I1202 14:51:04.981011 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher857c-account-delete-fg648" Dec 02 14:51:05 crc kubenswrapper[4814]: I1202 14:51:05.259175 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher857c-account-delete-fg648"] Dec 02 14:51:05 crc kubenswrapper[4814]: I1202 14:51:05.263459 4814 generic.go:334] "Generic (PLEG): container finished" podID="5ccf0f7e-6682-4f67-b164-a3b6ee53133e" containerID="bd0e23330a85d7d8b47fefac89db45de4ca3d83f17f2caa7956780ae089a2289" exitCode=143 Dec 02 14:51:05 crc kubenswrapper[4814]: I1202 14:51:05.263503 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5ccf0f7e-6682-4f67-b164-a3b6ee53133e","Type":"ContainerDied","Data":"bd0e23330a85d7d8b47fefac89db45de4ca3d83f17f2caa7956780ae089a2289"} Dec 02 14:51:05 crc kubenswrapper[4814]: I1202 14:51:05.266215 4814 generic.go:334] "Generic (PLEG): container finished" podID="2e291138-9070-4e85-8fbd-607f38631a50" 
containerID="e08266dacb23b5fd89a9832c764944b8b116f61c49f1730d47b78c3ec97c54bd" exitCode=0 Dec 02 14:51:05 crc kubenswrapper[4814]: I1202 14:51:05.266389 4814 generic.go:334] "Generic (PLEG): container finished" podID="2e291138-9070-4e85-8fbd-607f38631a50" containerID="1ae5c5d024b7a5223ecb1308c8e85a9ddead903a9d83d8d8a3717678d4b18b2e" exitCode=2 Dec 02 14:51:05 crc kubenswrapper[4814]: I1202 14:51:05.266398 4814 generic.go:334] "Generic (PLEG): container finished" podID="2e291138-9070-4e85-8fbd-607f38631a50" containerID="c2918513604157e41daf2ef051f93de29dc755ca502b6d380bbae5f8abb6670c" exitCode=0 Dec 02 14:51:05 crc kubenswrapper[4814]: I1202 14:51:05.266269 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2e291138-9070-4e85-8fbd-607f38631a50","Type":"ContainerDied","Data":"e08266dacb23b5fd89a9832c764944b8b116f61c49f1730d47b78c3ec97c54bd"} Dec 02 14:51:05 crc kubenswrapper[4814]: I1202 14:51:05.266440 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2e291138-9070-4e85-8fbd-607f38631a50","Type":"ContainerDied","Data":"1ae5c5d024b7a5223ecb1308c8e85a9ddead903a9d83d8d8a3717678d4b18b2e"} Dec 02 14:51:05 crc kubenswrapper[4814]: I1202 14:51:05.266454 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2e291138-9070-4e85-8fbd-607f38631a50","Type":"ContainerDied","Data":"c2918513604157e41daf2ef051f93de29dc755ca502b6d380bbae5f8abb6670c"} Dec 02 14:51:05 crc kubenswrapper[4814]: I1202 14:51:05.266597 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="58c4482d-bd8b-480c-b72c-a117f0c59288" containerName="watcher-decision-engine" containerID="cri-o://c3df678d115623af392b3911e1cc6af49d5662dd9c585ba89eed8aa728a0715c" gracePeriod=30 Dec 02 14:51:06 crc kubenswrapper[4814]: I1202 14:51:06.154635 4814 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593ae4fb-ae06-4616-b78d-ba6914c1b1a5" path="/var/lib/kubelet/pods/593ae4fb-ae06-4616-b78d-ba6914c1b1a5/volumes" Dec 02 14:51:06 crc kubenswrapper[4814]: I1202 14:51:06.277788 4814 generic.go:334] "Generic (PLEG): container finished" podID="5ccf0f7e-6682-4f67-b164-a3b6ee53133e" containerID="b2084e641484dbf83348f421a2ab06e85aa32eb58b71e32a614290a82c7e2a6b" exitCode=0 Dec 02 14:51:06 crc kubenswrapper[4814]: I1202 14:51:06.277865 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5ccf0f7e-6682-4f67-b164-a3b6ee53133e","Type":"ContainerDied","Data":"b2084e641484dbf83348f421a2ab06e85aa32eb58b71e32a614290a82c7e2a6b"} Dec 02 14:51:06 crc kubenswrapper[4814]: I1202 14:51:06.279750 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher857c-account-delete-fg648" event={"ID":"179ebedc-6394-4aa1-a50e-5f3636af0579","Type":"ContainerStarted","Data":"92ecfa864356e251abb405231e6ba7adc0b8d9398318e99e2ec2ee96622585f2"} Dec 02 14:51:06 crc kubenswrapper[4814]: I1202 14:51:06.279785 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher857c-account-delete-fg648" event={"ID":"179ebedc-6394-4aa1-a50e-5f3636af0579","Type":"ContainerStarted","Data":"d89ec539db2637ca69001c3ba2764e4c17eb8bedaad69988572c24d9ab95ea6c"} Dec 02 14:51:06 crc kubenswrapper[4814]: I1202 14:51:06.298144 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher857c-account-delete-fg648" podStartSLOduration=2.29812154 podStartE2EDuration="2.29812154s" podCreationTimestamp="2025-12-02 14:51:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:51:06.29292738 +0000 UTC m=+1328.417289594" watchObservedRunningTime="2025-12-02 14:51:06.29812154 +0000 UTC m=+1328.422483754" 
Dec 02 14:51:06 crc kubenswrapper[4814]: E1202 14:51:06.842175 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3439b46ab2bfc5454e11e928fec16ff750fb628f505a4198743940a9a5fec09e" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 14:51:06 crc kubenswrapper[4814]: E1202 14:51:06.850151 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3439b46ab2bfc5454e11e928fec16ff750fb628f505a4198743940a9a5fec09e" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 14:51:06 crc kubenswrapper[4814]: E1202 14:51:06.858384 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3439b46ab2bfc5454e11e928fec16ff750fb628f505a4198743940a9a5fec09e" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 14:51:06 crc kubenswrapper[4814]: E1202 14:51:06.858457 4814 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="df93acdc-d9c1-4ba8-8628-8ec9e51d5023" containerName="watcher-applier" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.065582 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.123955 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-config-data\") pod \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\" (UID: \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\") " Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.124122 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-custom-prometheus-ca\") pod \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\" (UID: \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\") " Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.124168 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-logs\") pod \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\" (UID: \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\") " Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.124197 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-combined-ca-bundle\") pod \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\" (UID: \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\") " Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.124272 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhrlw\" (UniqueName: \"kubernetes.io/projected/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-kube-api-access-hhrlw\") pod \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\" (UID: \"5ccf0f7e-6682-4f67-b164-a3b6ee53133e\") " Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.124629 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-logs" (OuterVolumeSpecName: "logs") pod "5ccf0f7e-6682-4f67-b164-a3b6ee53133e" (UID: "5ccf0f7e-6682-4f67-b164-a3b6ee53133e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.132128 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-kube-api-access-hhrlw" (OuterVolumeSpecName: "kube-api-access-hhrlw") pod "5ccf0f7e-6682-4f67-b164-a3b6ee53133e" (UID: "5ccf0f7e-6682-4f67-b164-a3b6ee53133e"). InnerVolumeSpecName "kube-api-access-hhrlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.165756 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ccf0f7e-6682-4f67-b164-a3b6ee53133e" (UID: "5ccf0f7e-6682-4f67-b164-a3b6ee53133e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.166523 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "5ccf0f7e-6682-4f67-b164-a3b6ee53133e" (UID: "5ccf0f7e-6682-4f67-b164-a3b6ee53133e"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.174147 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-config-data" (OuterVolumeSpecName: "config-data") pod "5ccf0f7e-6682-4f67-b164-a3b6ee53133e" (UID: "5ccf0f7e-6682-4f67-b164-a3b6ee53133e"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.226334 4814 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.226364 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.226375 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.226387 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhrlw\" (UniqueName: \"kubernetes.io/projected/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-kube-api-access-hhrlw\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.226395 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ccf0f7e-6682-4f67-b164-a3b6ee53133e-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.317455 4814 generic.go:334] "Generic (PLEG): container finished" podID="58c4482d-bd8b-480c-b72c-a117f0c59288" containerID="c3df678d115623af392b3911e1cc6af49d5662dd9c585ba89eed8aa728a0715c" exitCode=0 Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.317516 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"58c4482d-bd8b-480c-b72c-a117f0c59288","Type":"ContainerDied","Data":"c3df678d115623af392b3911e1cc6af49d5662dd9c585ba89eed8aa728a0715c"} Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.333757 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5ccf0f7e-6682-4f67-b164-a3b6ee53133e","Type":"ContainerDied","Data":"efc38813fd760e105c2a86a5a56006d332d5ed25d31a71e5801435d129df8d43"} Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.333806 4814 scope.go:117] "RemoveContainer" containerID="b2084e641484dbf83348f421a2ab06e85aa32eb58b71e32a614290a82c7e2a6b" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.333939 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.348453 4814 generic.go:334] "Generic (PLEG): container finished" podID="179ebedc-6394-4aa1-a50e-5f3636af0579" containerID="92ecfa864356e251abb405231e6ba7adc0b8d9398318e99e2ec2ee96622585f2" exitCode=0 Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.348501 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher857c-account-delete-fg648" event={"ID":"179ebedc-6394-4aa1-a50e-5f3636af0579","Type":"ContainerDied","Data":"92ecfa864356e251abb405231e6ba7adc0b8d9398318e99e2ec2ee96622585f2"} Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.377976 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.382877 4814 scope.go:117] "RemoveContainer" containerID="bd0e23330a85d7d8b47fefac89db45de4ca3d83f17f2caa7956780ae089a2289" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.388210 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.425695 
4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.430326 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58c4482d-bd8b-480c-b72c-a117f0c59288-logs\") pod \"58c4482d-bd8b-480c-b72c-a117f0c59288\" (UID: \"58c4482d-bd8b-480c-b72c-a117f0c59288\") " Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.430371 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/58c4482d-bd8b-480c-b72c-a117f0c59288-custom-prometheus-ca\") pod \"58c4482d-bd8b-480c-b72c-a117f0c59288\" (UID: \"58c4482d-bd8b-480c-b72c-a117f0c59288\") " Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.430420 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8x45\" (UniqueName: \"kubernetes.io/projected/58c4482d-bd8b-480c-b72c-a117f0c59288-kube-api-access-q8x45\") pod \"58c4482d-bd8b-480c-b72c-a117f0c59288\" (UID: \"58c4482d-bd8b-480c-b72c-a117f0c59288\") " Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.430485 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c4482d-bd8b-480c-b72c-a117f0c59288-config-data\") pod \"58c4482d-bd8b-480c-b72c-a117f0c59288\" (UID: \"58c4482d-bd8b-480c-b72c-a117f0c59288\") " Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.430532 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c4482d-bd8b-480c-b72c-a117f0c59288-combined-ca-bundle\") pod \"58c4482d-bd8b-480c-b72c-a117f0c59288\" (UID: \"58c4482d-bd8b-480c-b72c-a117f0c59288\") " Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.434601 4814 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58c4482d-bd8b-480c-b72c-a117f0c59288-logs" (OuterVolumeSpecName: "logs") pod "58c4482d-bd8b-480c-b72c-a117f0c59288" (UID: "58c4482d-bd8b-480c-b72c-a117f0c59288"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.437359 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58c4482d-bd8b-480c-b72c-a117f0c59288-kube-api-access-q8x45" (OuterVolumeSpecName: "kube-api-access-q8x45") pod "58c4482d-bd8b-480c-b72c-a117f0c59288" (UID: "58c4482d-bd8b-480c-b72c-a117f0c59288"). InnerVolumeSpecName "kube-api-access-q8x45". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.476847 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c4482d-bd8b-480c-b72c-a117f0c59288-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58c4482d-bd8b-480c-b72c-a117f0c59288" (UID: "58c4482d-bd8b-480c-b72c-a117f0c59288"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.501433 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c4482d-bd8b-480c-b72c-a117f0c59288-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "58c4482d-bd8b-480c-b72c-a117f0c59288" (UID: "58c4482d-bd8b-480c-b72c-a117f0c59288"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.508131 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c4482d-bd8b-480c-b72c-a117f0c59288-config-data" (OuterVolumeSpecName: "config-data") pod "58c4482d-bd8b-480c-b72c-a117f0c59288" (UID: "58c4482d-bd8b-480c-b72c-a117f0c59288"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.532089 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8x45\" (UniqueName: \"kubernetes.io/projected/58c4482d-bd8b-480c-b72c-a117f0c59288-kube-api-access-q8x45\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.532164 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c4482d-bd8b-480c-b72c-a117f0c59288-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.532178 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c4482d-bd8b-480c-b72c-a117f0c59288-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.532187 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58c4482d-bd8b-480c-b72c-a117f0c59288-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.532196 4814 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/58c4482d-bd8b-480c-b72c-a117f0c59288-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.821754 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.836272 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-ceilometer-tls-certs\") pod \"2e291138-9070-4e85-8fbd-607f38631a50\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.836322 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e291138-9070-4e85-8fbd-607f38631a50-log-httpd\") pod \"2e291138-9070-4e85-8fbd-607f38631a50\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.836349 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-scripts\") pod \"2e291138-9070-4e85-8fbd-607f38631a50\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.836413 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-sg-core-conf-yaml\") pod \"2e291138-9070-4e85-8fbd-607f38631a50\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.836471 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpfj8\" (UniqueName: \"kubernetes.io/projected/2e291138-9070-4e85-8fbd-607f38631a50-kube-api-access-vpfj8\") pod \"2e291138-9070-4e85-8fbd-607f38631a50\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.836510 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/2e291138-9070-4e85-8fbd-607f38631a50-run-httpd\") pod \"2e291138-9070-4e85-8fbd-607f38631a50\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.836555 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-config-data\") pod \"2e291138-9070-4e85-8fbd-607f38631a50\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.836643 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-combined-ca-bundle\") pod \"2e291138-9070-4e85-8fbd-607f38631a50\" (UID: \"2e291138-9070-4e85-8fbd-607f38631a50\") " Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.836978 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e291138-9070-4e85-8fbd-607f38631a50-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2e291138-9070-4e85-8fbd-607f38631a50" (UID: "2e291138-9070-4e85-8fbd-607f38631a50"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.837032 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e291138-9070-4e85-8fbd-607f38631a50-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2e291138-9070-4e85-8fbd-607f38631a50" (UID: "2e291138-9070-4e85-8fbd-607f38631a50"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.839385 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-scripts" (OuterVolumeSpecName: "scripts") pod "2e291138-9070-4e85-8fbd-607f38631a50" (UID: "2e291138-9070-4e85-8fbd-607f38631a50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.839694 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e291138-9070-4e85-8fbd-607f38631a50-kube-api-access-vpfj8" (OuterVolumeSpecName: "kube-api-access-vpfj8") pod "2e291138-9070-4e85-8fbd-607f38631a50" (UID: "2e291138-9070-4e85-8fbd-607f38631a50"). InnerVolumeSpecName "kube-api-access-vpfj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.883841 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2e291138-9070-4e85-8fbd-607f38631a50" (UID: "2e291138-9070-4e85-8fbd-607f38631a50"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.896869 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2e291138-9070-4e85-8fbd-607f38631a50" (UID: "2e291138-9070-4e85-8fbd-607f38631a50"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.907975 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e291138-9070-4e85-8fbd-607f38631a50" (UID: "2e291138-9070-4e85-8fbd-607f38631a50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.934178 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-config-data" (OuterVolumeSpecName: "config-data") pod "2e291138-9070-4e85-8fbd-607f38631a50" (UID: "2e291138-9070-4e85-8fbd-607f38631a50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.938164 4814 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.938298 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpfj8\" (UniqueName: \"kubernetes.io/projected/2e291138-9070-4e85-8fbd-607f38631a50-kube-api-access-vpfj8\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.938389 4814 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e291138-9070-4e85-8fbd-607f38631a50-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.938454 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-config-data\") on node \"crc\" DevicePath \"\"" 
Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.938515 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.938567 4814 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.938635 4814 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e291138-9070-4e85-8fbd-607f38631a50-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:07 crc kubenswrapper[4814]: I1202 14:51:07.938701 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e291138-9070-4e85-8fbd-607f38631a50-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.157060 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ccf0f7e-6682-4f67-b164-a3b6ee53133e" path="/var/lib/kubelet/pods/5ccf0f7e-6682-4f67-b164-a3b6ee53133e/volumes" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.364417 4814 generic.go:334] "Generic (PLEG): container finished" podID="2e291138-9070-4e85-8fbd-607f38631a50" containerID="5d0caabe50f40f1cc187021f4565cc98919a3db5085fe05ef6894e43e98e885b" exitCode=0 Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.364563 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2e291138-9070-4e85-8fbd-607f38631a50","Type":"ContainerDied","Data":"5d0caabe50f40f1cc187021f4565cc98919a3db5085fe05ef6894e43e98e885b"} Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.364610 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2e291138-9070-4e85-8fbd-607f38631a50","Type":"ContainerDied","Data":"4eab2a0aaea3f0a5322f8c5b441a662912dd2804c716c9d73f52fc1b4b12a7d7"} Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.364639 4814 scope.go:117] "RemoveContainer" containerID="e08266dacb23b5fd89a9832c764944b8b116f61c49f1730d47b78c3ec97c54bd" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.364837 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.373411 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"58c4482d-bd8b-480c-b72c-a117f0c59288","Type":"ContainerDied","Data":"d006757e3bed6d7d02d50d8b0ccc9bdf88ed59dd6653e0e38a7099468ccba18e"} Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.373428 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.421555 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.436625 4814 scope.go:117] "RemoveContainer" containerID="1ae5c5d024b7a5223ecb1308c8e85a9ddead903a9d83d8d8a3717678d4b18b2e" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.444182 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.466437 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.474378 4814 scope.go:117] "RemoveContainer" containerID="5d0caabe50f40f1cc187021f4565cc98919a3db5085fe05ef6894e43e98e885b" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.480303 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.487592 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:51:08 crc kubenswrapper[4814]: E1202 14:51:08.487983 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e291138-9070-4e85-8fbd-607f38631a50" containerName="proxy-httpd" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.488005 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e291138-9070-4e85-8fbd-607f38631a50" containerName="proxy-httpd" Dec 02 14:51:08 crc kubenswrapper[4814]: E1202 14:51:08.488023 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c4482d-bd8b-480c-b72c-a117f0c59288" containerName="watcher-decision-engine" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.488031 4814 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="58c4482d-bd8b-480c-b72c-a117f0c59288" containerName="watcher-decision-engine" Dec 02 14:51:08 crc kubenswrapper[4814]: E1202 14:51:08.488054 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e291138-9070-4e85-8fbd-607f38631a50" containerName="sg-core" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.488061 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e291138-9070-4e85-8fbd-607f38631a50" containerName="sg-core" Dec 02 14:51:08 crc kubenswrapper[4814]: E1202 14:51:08.488078 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ccf0f7e-6682-4f67-b164-a3b6ee53133e" containerName="watcher-kuttl-api-log" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.488086 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ccf0f7e-6682-4f67-b164-a3b6ee53133e" containerName="watcher-kuttl-api-log" Dec 02 14:51:08 crc kubenswrapper[4814]: E1202 14:51:08.488101 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e291138-9070-4e85-8fbd-607f38631a50" containerName="ceilometer-notification-agent" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.488109 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e291138-9070-4e85-8fbd-607f38631a50" containerName="ceilometer-notification-agent" Dec 02 14:51:08 crc kubenswrapper[4814]: E1202 14:51:08.488123 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e291138-9070-4e85-8fbd-607f38631a50" containerName="ceilometer-central-agent" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.488132 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e291138-9070-4e85-8fbd-607f38631a50" containerName="ceilometer-central-agent" Dec 02 14:51:08 crc kubenswrapper[4814]: E1202 14:51:08.488142 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ccf0f7e-6682-4f67-b164-a3b6ee53133e" containerName="watcher-api" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.488149 4814 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5ccf0f7e-6682-4f67-b164-a3b6ee53133e" containerName="watcher-api" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.488343 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ccf0f7e-6682-4f67-b164-a3b6ee53133e" containerName="watcher-api" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.488355 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="58c4482d-bd8b-480c-b72c-a117f0c59288" containerName="watcher-decision-engine" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.488402 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e291138-9070-4e85-8fbd-607f38631a50" containerName="sg-core" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.488413 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e291138-9070-4e85-8fbd-607f38631a50" containerName="proxy-httpd" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.488441 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e291138-9070-4e85-8fbd-607f38631a50" containerName="ceilometer-central-agent" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.488463 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e291138-9070-4e85-8fbd-607f38631a50" containerName="ceilometer-notification-agent" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.488484 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ccf0f7e-6682-4f67-b164-a3b6ee53133e" containerName="watcher-kuttl-api-log" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.490600 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.493692 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.495863 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.496568 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.497012 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.515439 4814 scope.go:117] "RemoveContainer" containerID="c2918513604157e41daf2ef051f93de29dc755ca502b6d380bbae5f8abb6670c" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.539162 4814 scope.go:117] "RemoveContainer" containerID="e08266dacb23b5fd89a9832c764944b8b116f61c49f1730d47b78c3ec97c54bd" Dec 02 14:51:08 crc kubenswrapper[4814]: E1202 14:51:08.539628 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e08266dacb23b5fd89a9832c764944b8b116f61c49f1730d47b78c3ec97c54bd\": container with ID starting with e08266dacb23b5fd89a9832c764944b8b116f61c49f1730d47b78c3ec97c54bd not found: ID does not exist" containerID="e08266dacb23b5fd89a9832c764944b8b116f61c49f1730d47b78c3ec97c54bd" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.539697 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e08266dacb23b5fd89a9832c764944b8b116f61c49f1730d47b78c3ec97c54bd"} err="failed to get container status \"e08266dacb23b5fd89a9832c764944b8b116f61c49f1730d47b78c3ec97c54bd\": rpc error: code = NotFound desc = could not find container 
\"e08266dacb23b5fd89a9832c764944b8b116f61c49f1730d47b78c3ec97c54bd\": container with ID starting with e08266dacb23b5fd89a9832c764944b8b116f61c49f1730d47b78c3ec97c54bd not found: ID does not exist" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.539717 4814 scope.go:117] "RemoveContainer" containerID="1ae5c5d024b7a5223ecb1308c8e85a9ddead903a9d83d8d8a3717678d4b18b2e" Dec 02 14:51:08 crc kubenswrapper[4814]: E1202 14:51:08.539995 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ae5c5d024b7a5223ecb1308c8e85a9ddead903a9d83d8d8a3717678d4b18b2e\": container with ID starting with 1ae5c5d024b7a5223ecb1308c8e85a9ddead903a9d83d8d8a3717678d4b18b2e not found: ID does not exist" containerID="1ae5c5d024b7a5223ecb1308c8e85a9ddead903a9d83d8d8a3717678d4b18b2e" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.540042 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ae5c5d024b7a5223ecb1308c8e85a9ddead903a9d83d8d8a3717678d4b18b2e"} err="failed to get container status \"1ae5c5d024b7a5223ecb1308c8e85a9ddead903a9d83d8d8a3717678d4b18b2e\": rpc error: code = NotFound desc = could not find container \"1ae5c5d024b7a5223ecb1308c8e85a9ddead903a9d83d8d8a3717678d4b18b2e\": container with ID starting with 1ae5c5d024b7a5223ecb1308c8e85a9ddead903a9d83d8d8a3717678d4b18b2e not found: ID does not exist" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.540080 4814 scope.go:117] "RemoveContainer" containerID="5d0caabe50f40f1cc187021f4565cc98919a3db5085fe05ef6894e43e98e885b" Dec 02 14:51:08 crc kubenswrapper[4814]: E1202 14:51:08.540616 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d0caabe50f40f1cc187021f4565cc98919a3db5085fe05ef6894e43e98e885b\": container with ID starting with 5d0caabe50f40f1cc187021f4565cc98919a3db5085fe05ef6894e43e98e885b not found: ID does not exist" 
containerID="5d0caabe50f40f1cc187021f4565cc98919a3db5085fe05ef6894e43e98e885b" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.540643 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d0caabe50f40f1cc187021f4565cc98919a3db5085fe05ef6894e43e98e885b"} err="failed to get container status \"5d0caabe50f40f1cc187021f4565cc98919a3db5085fe05ef6894e43e98e885b\": rpc error: code = NotFound desc = could not find container \"5d0caabe50f40f1cc187021f4565cc98919a3db5085fe05ef6894e43e98e885b\": container with ID starting with 5d0caabe50f40f1cc187021f4565cc98919a3db5085fe05ef6894e43e98e885b not found: ID does not exist" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.540658 4814 scope.go:117] "RemoveContainer" containerID="c2918513604157e41daf2ef051f93de29dc755ca502b6d380bbae5f8abb6670c" Dec 02 14:51:08 crc kubenswrapper[4814]: E1202 14:51:08.540919 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2918513604157e41daf2ef051f93de29dc755ca502b6d380bbae5f8abb6670c\": container with ID starting with c2918513604157e41daf2ef051f93de29dc755ca502b6d380bbae5f8abb6670c not found: ID does not exist" containerID="c2918513604157e41daf2ef051f93de29dc755ca502b6d380bbae5f8abb6670c" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.540949 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2918513604157e41daf2ef051f93de29dc755ca502b6d380bbae5f8abb6670c"} err="failed to get container status \"c2918513604157e41daf2ef051f93de29dc755ca502b6d380bbae5f8abb6670c\": rpc error: code = NotFound desc = could not find container \"c2918513604157e41daf2ef051f93de29dc755ca502b6d380bbae5f8abb6670c\": container with ID starting with c2918513604157e41daf2ef051f93de29dc755ca502b6d380bbae5f8abb6670c not found: ID does not exist" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.540992 4814 scope.go:117] 
"RemoveContainer" containerID="c3df678d115623af392b3911e1cc6af49d5662dd9c585ba89eed8aa728a0715c" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.648593 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-config-data\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.648811 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-scripts\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.648876 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdrhj\" (UniqueName: \"kubernetes.io/projected/b91ad288-708b-4cd8-9dc9-b2530b628c9c-kube-api-access-xdrhj\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.650160 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b91ad288-708b-4cd8-9dc9-b2530b628c9c-run-httpd\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.650269 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " 
pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.650335 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.650374 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.650427 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b91ad288-708b-4cd8-9dc9-b2530b628c9c-log-httpd\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.742926 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher857c-account-delete-fg648" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.751559 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/179ebedc-6394-4aa1-a50e-5f3636af0579-operator-scripts\") pod \"179ebedc-6394-4aa1-a50e-5f3636af0579\" (UID: \"179ebedc-6394-4aa1-a50e-5f3636af0579\") " Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.751615 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qknbh\" (UniqueName: \"kubernetes.io/projected/179ebedc-6394-4aa1-a50e-5f3636af0579-kube-api-access-qknbh\") pod \"179ebedc-6394-4aa1-a50e-5f3636af0579\" (UID: \"179ebedc-6394-4aa1-a50e-5f3636af0579\") " Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.751754 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.751773 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b91ad288-708b-4cd8-9dc9-b2530b628c9c-log-httpd\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.751810 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-config-data\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.751825 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xdrhj\" (UniqueName: \"kubernetes.io/projected/b91ad288-708b-4cd8-9dc9-b2530b628c9c-kube-api-access-xdrhj\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.751841 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-scripts\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.751883 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b91ad288-708b-4cd8-9dc9-b2530b628c9c-run-httpd\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.751924 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.751959 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.757942 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-config-data\") pod 
\"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.758753 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/179ebedc-6394-4aa1-a50e-5f3636af0579-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "179ebedc-6394-4aa1-a50e-5f3636af0579" (UID: "179ebedc-6394-4aa1-a50e-5f3636af0579"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.759375 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b91ad288-708b-4cd8-9dc9-b2530b628c9c-log-httpd\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.760222 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b91ad288-708b-4cd8-9dc9-b2530b628c9c-run-httpd\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.760632 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.761873 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-scripts\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc 
kubenswrapper[4814]: I1202 14:51:08.766138 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/179ebedc-6394-4aa1-a50e-5f3636af0579-kube-api-access-qknbh" (OuterVolumeSpecName: "kube-api-access-qknbh") pod "179ebedc-6394-4aa1-a50e-5f3636af0579" (UID: "179ebedc-6394-4aa1-a50e-5f3636af0579"). InnerVolumeSpecName "kube-api-access-qknbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.766716 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.769169 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.783590 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdrhj\" (UniqueName: \"kubernetes.io/projected/b91ad288-708b-4cd8-9dc9-b2530b628c9c-kube-api-access-xdrhj\") pod \"ceilometer-0\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.812055 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.853178 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/179ebedc-6394-4aa1-a50e-5f3636af0579-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:08 crc kubenswrapper[4814]: I1202 14:51:08.853501 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qknbh\" (UniqueName: \"kubernetes.io/projected/179ebedc-6394-4aa1-a50e-5f3636af0579-kube-api-access-qknbh\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:09 crc kubenswrapper[4814]: I1202 14:51:09.299336 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:51:09 crc kubenswrapper[4814]: W1202 14:51:09.305955 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb91ad288_708b_4cd8_9dc9_b2530b628c9c.slice/crio-dd0bb3445452a472b1f7d3c70be9a554a5b97bd7af1f093c2d88e30742078a10 WatchSource:0}: Error finding container dd0bb3445452a472b1f7d3c70be9a554a5b97bd7af1f093c2d88e30742078a10: Status 404 returned error can't find the container with id dd0bb3445452a472b1f7d3c70be9a554a5b97bd7af1f093c2d88e30742078a10 Dec 02 14:51:09 crc kubenswrapper[4814]: I1202 14:51:09.391262 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher857c-account-delete-fg648" event={"ID":"179ebedc-6394-4aa1-a50e-5f3636af0579","Type":"ContainerDied","Data":"d89ec539db2637ca69001c3ba2764e4c17eb8bedaad69988572c24d9ab95ea6c"} Dec 02 14:51:09 crc kubenswrapper[4814]: I1202 14:51:09.391277 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher857c-account-delete-fg648" Dec 02 14:51:09 crc kubenswrapper[4814]: I1202 14:51:09.391323 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d89ec539db2637ca69001c3ba2764e4c17eb8bedaad69988572c24d9ab95ea6c" Dec 02 14:51:09 crc kubenswrapper[4814]: I1202 14:51:09.393418 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b91ad288-708b-4cd8-9dc9-b2530b628c9c","Type":"ContainerStarted","Data":"dd0bb3445452a472b1f7d3c70be9a554a5b97bd7af1f093c2d88e30742078a10"} Dec 02 14:51:09 crc kubenswrapper[4814]: I1202 14:51:09.636571 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-ftgh5"] Dec 02 14:51:09 crc kubenswrapper[4814]: I1202 14:51:09.644991 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-ftgh5"] Dec 02 14:51:09 crc kubenswrapper[4814]: I1202 14:51:09.650966 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher857c-account-delete-fg648"] Dec 02 14:51:09 crc kubenswrapper[4814]: I1202 14:51:09.656406 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-857c-account-create-update-62bvp"] Dec 02 14:51:09 crc kubenswrapper[4814]: I1202 14:51:09.661802 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher857c-account-delete-fg648"] Dec 02 14:51:09 crc kubenswrapper[4814]: I1202 14:51:09.666694 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-857c-account-create-update-62bvp"] Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.078479 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.095043 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-combined-ca-bundle\") pod \"df93acdc-d9c1-4ba8-8628-8ec9e51d5023\" (UID: \"df93acdc-d9c1-4ba8-8628-8ec9e51d5023\") " Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.095087 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-config-data\") pod \"df93acdc-d9c1-4ba8-8628-8ec9e51d5023\" (UID: \"df93acdc-d9c1-4ba8-8628-8ec9e51d5023\") " Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.095142 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-logs\") pod \"df93acdc-d9c1-4ba8-8628-8ec9e51d5023\" (UID: \"df93acdc-d9c1-4ba8-8628-8ec9e51d5023\") " Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.095196 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k7bx\" (UniqueName: \"kubernetes.io/projected/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-kube-api-access-2k7bx\") pod \"df93acdc-d9c1-4ba8-8628-8ec9e51d5023\" (UID: \"df93acdc-d9c1-4ba8-8628-8ec9e51d5023\") " Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.095660 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-logs" (OuterVolumeSpecName: "logs") pod "df93acdc-d9c1-4ba8-8628-8ec9e51d5023" (UID: "df93acdc-d9c1-4ba8-8628-8ec9e51d5023"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.105458 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-kube-api-access-2k7bx" (OuterVolumeSpecName: "kube-api-access-2k7bx") pod "df93acdc-d9c1-4ba8-8628-8ec9e51d5023" (UID: "df93acdc-d9c1-4ba8-8628-8ec9e51d5023"). InnerVolumeSpecName "kube-api-access-2k7bx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.124384 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df93acdc-d9c1-4ba8-8628-8ec9e51d5023" (UID: "df93acdc-d9c1-4ba8-8628-8ec9e51d5023"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.152369 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-config-data" (OuterVolumeSpecName: "config-data") pod "df93acdc-d9c1-4ba8-8628-8ec9e51d5023" (UID: "df93acdc-d9c1-4ba8-8628-8ec9e51d5023"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.155108 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="179ebedc-6394-4aa1-a50e-5f3636af0579" path="/var/lib/kubelet/pods/179ebedc-6394-4aa1-a50e-5f3636af0579/volumes" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.155918 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e291138-9070-4e85-8fbd-607f38631a50" path="/var/lib/kubelet/pods/2e291138-9070-4e85-8fbd-607f38631a50/volumes" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.157335 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45e73995-fe53-4756-81da-3c05cda569f8" path="/var/lib/kubelet/pods/45e73995-fe53-4756-81da-3c05cda569f8/volumes" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.158691 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58c4482d-bd8b-480c-b72c-a117f0c59288" path="/var/lib/kubelet/pods/58c4482d-bd8b-480c-b72c-a117f0c59288/volumes" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.159370 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bde7a121-3624-453e-9613-df325f320c51" path="/var/lib/kubelet/pods/bde7a121-3624-453e-9613-df325f320c51/volumes" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.198651 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.198676 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.198684 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.198695 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k7bx\" (UniqueName: \"kubernetes.io/projected/df93acdc-d9c1-4ba8-8628-8ec9e51d5023-kube-api-access-2k7bx\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.405910 4814 generic.go:334] "Generic (PLEG): container finished" podID="df93acdc-d9c1-4ba8-8628-8ec9e51d5023" containerID="3439b46ab2bfc5454e11e928fec16ff750fb628f505a4198743940a9a5fec09e" exitCode=0 Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.406100 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"df93acdc-d9c1-4ba8-8628-8ec9e51d5023","Type":"ContainerDied","Data":"3439b46ab2bfc5454e11e928fec16ff750fb628f505a4198743940a9a5fec09e"} Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.407127 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"df93acdc-d9c1-4ba8-8628-8ec9e51d5023","Type":"ContainerDied","Data":"41bebffbb65944eb210dff7da2d6670b1b631edfe767988967278b5a8177b790"} Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.406153 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.407190 4814 scope.go:117] "RemoveContainer" containerID="3439b46ab2bfc5454e11e928fec16ff750fb628f505a4198743940a9a5fec09e" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.409215 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b91ad288-708b-4cd8-9dc9-b2530b628c9c","Type":"ContainerStarted","Data":"1407e37671a45e01b1a405f44a7220fddf69396b0b48de02d8b27b5b021421b3"} Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.428372 4814 scope.go:117] "RemoveContainer" containerID="3439b46ab2bfc5454e11e928fec16ff750fb628f505a4198743940a9a5fec09e" Dec 02 14:51:10 crc kubenswrapper[4814]: E1202 14:51:10.428867 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3439b46ab2bfc5454e11e928fec16ff750fb628f505a4198743940a9a5fec09e\": container with ID starting with 3439b46ab2bfc5454e11e928fec16ff750fb628f505a4198743940a9a5fec09e not found: ID does not exist" containerID="3439b46ab2bfc5454e11e928fec16ff750fb628f505a4198743940a9a5fec09e" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.428906 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3439b46ab2bfc5454e11e928fec16ff750fb628f505a4198743940a9a5fec09e"} err="failed to get container status \"3439b46ab2bfc5454e11e928fec16ff750fb628f505a4198743940a9a5fec09e\": rpc error: code = NotFound desc = could not find container \"3439b46ab2bfc5454e11e928fec16ff750fb628f505a4198743940a9a5fec09e\": container with ID starting with 3439b46ab2bfc5454e11e928fec16ff750fb628f505a4198743940a9a5fec09e not found: ID does not exist" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.429204 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 14:51:10 crc 
kubenswrapper[4814]: I1202 14:51:10.444343 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.852567 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-j4cvr"] Dec 02 14:51:10 crc kubenswrapper[4814]: E1202 14:51:10.852901 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179ebedc-6394-4aa1-a50e-5f3636af0579" containerName="mariadb-account-delete" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.852916 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="179ebedc-6394-4aa1-a50e-5f3636af0579" containerName="mariadb-account-delete" Dec 02 14:51:10 crc kubenswrapper[4814]: E1202 14:51:10.852944 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df93acdc-d9c1-4ba8-8628-8ec9e51d5023" containerName="watcher-applier" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.852950 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="df93acdc-d9c1-4ba8-8628-8ec9e51d5023" containerName="watcher-applier" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.853091 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="179ebedc-6394-4aa1-a50e-5f3636af0579" containerName="mariadb-account-delete" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.853132 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="df93acdc-d9c1-4ba8-8628-8ec9e51d5023" containerName="watcher-applier" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.853776 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-j4cvr" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.863541 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-j4cvr"] Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.870673 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-bb33-account-create-update-9sfxr"] Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.881051 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-bb33-account-create-update-9sfxr" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.882988 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Dec 02 14:51:10 crc kubenswrapper[4814]: I1202 14:51:10.892759 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-bb33-account-create-update-9sfxr"] Dec 02 14:51:11 crc kubenswrapper[4814]: I1202 14:51:11.015540 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8875ebe-e290-4e4d-995e-1c3ddb5ed526-operator-scripts\") pod \"watcher-db-create-j4cvr\" (UID: \"d8875ebe-e290-4e4d-995e-1c3ddb5ed526\") " pod="watcher-kuttl-default/watcher-db-create-j4cvr" Dec 02 14:51:11 crc kubenswrapper[4814]: I1202 14:51:11.015599 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhts2\" (UniqueName: \"kubernetes.io/projected/8c6b7a74-3bde-4eb4-816b-97124f6bd079-kube-api-access-lhts2\") pod \"watcher-bb33-account-create-update-9sfxr\" (UID: \"8c6b7a74-3bde-4eb4-816b-97124f6bd079\") " pod="watcher-kuttl-default/watcher-bb33-account-create-update-9sfxr" Dec 02 14:51:11 crc kubenswrapper[4814]: I1202 14:51:11.015628 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c6b7a74-3bde-4eb4-816b-97124f6bd079-operator-scripts\") pod \"watcher-bb33-account-create-update-9sfxr\" (UID: \"8c6b7a74-3bde-4eb4-816b-97124f6bd079\") " pod="watcher-kuttl-default/watcher-bb33-account-create-update-9sfxr" Dec 02 14:51:11 crc kubenswrapper[4814]: I1202 14:51:11.015685 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q7nm\" (UniqueName: \"kubernetes.io/projected/d8875ebe-e290-4e4d-995e-1c3ddb5ed526-kube-api-access-6q7nm\") pod \"watcher-db-create-j4cvr\" (UID: \"d8875ebe-e290-4e4d-995e-1c3ddb5ed526\") " pod="watcher-kuttl-default/watcher-db-create-j4cvr" Dec 02 14:51:11 crc kubenswrapper[4814]: I1202 14:51:11.117463 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8875ebe-e290-4e4d-995e-1c3ddb5ed526-operator-scripts\") pod \"watcher-db-create-j4cvr\" (UID: \"d8875ebe-e290-4e4d-995e-1c3ddb5ed526\") " pod="watcher-kuttl-default/watcher-db-create-j4cvr" Dec 02 14:51:11 crc kubenswrapper[4814]: I1202 14:51:11.117550 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhts2\" (UniqueName: \"kubernetes.io/projected/8c6b7a74-3bde-4eb4-816b-97124f6bd079-kube-api-access-lhts2\") pod \"watcher-bb33-account-create-update-9sfxr\" (UID: \"8c6b7a74-3bde-4eb4-816b-97124f6bd079\") " pod="watcher-kuttl-default/watcher-bb33-account-create-update-9sfxr" Dec 02 14:51:11 crc kubenswrapper[4814]: I1202 14:51:11.117583 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c6b7a74-3bde-4eb4-816b-97124f6bd079-operator-scripts\") pod \"watcher-bb33-account-create-update-9sfxr\" (UID: \"8c6b7a74-3bde-4eb4-816b-97124f6bd079\") " 
pod="watcher-kuttl-default/watcher-bb33-account-create-update-9sfxr" Dec 02 14:51:11 crc kubenswrapper[4814]: I1202 14:51:11.117641 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q7nm\" (UniqueName: \"kubernetes.io/projected/d8875ebe-e290-4e4d-995e-1c3ddb5ed526-kube-api-access-6q7nm\") pod \"watcher-db-create-j4cvr\" (UID: \"d8875ebe-e290-4e4d-995e-1c3ddb5ed526\") " pod="watcher-kuttl-default/watcher-db-create-j4cvr" Dec 02 14:51:11 crc kubenswrapper[4814]: I1202 14:51:11.118353 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c6b7a74-3bde-4eb4-816b-97124f6bd079-operator-scripts\") pod \"watcher-bb33-account-create-update-9sfxr\" (UID: \"8c6b7a74-3bde-4eb4-816b-97124f6bd079\") " pod="watcher-kuttl-default/watcher-bb33-account-create-update-9sfxr" Dec 02 14:51:11 crc kubenswrapper[4814]: I1202 14:51:11.118369 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8875ebe-e290-4e4d-995e-1c3ddb5ed526-operator-scripts\") pod \"watcher-db-create-j4cvr\" (UID: \"d8875ebe-e290-4e4d-995e-1c3ddb5ed526\") " pod="watcher-kuttl-default/watcher-db-create-j4cvr" Dec 02 14:51:11 crc kubenswrapper[4814]: I1202 14:51:11.132156 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhts2\" (UniqueName: \"kubernetes.io/projected/8c6b7a74-3bde-4eb4-816b-97124f6bd079-kube-api-access-lhts2\") pod \"watcher-bb33-account-create-update-9sfxr\" (UID: \"8c6b7a74-3bde-4eb4-816b-97124f6bd079\") " pod="watcher-kuttl-default/watcher-bb33-account-create-update-9sfxr" Dec 02 14:51:11 crc kubenswrapper[4814]: I1202 14:51:11.136612 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q7nm\" (UniqueName: \"kubernetes.io/projected/d8875ebe-e290-4e4d-995e-1c3ddb5ed526-kube-api-access-6q7nm\") pod 
\"watcher-db-create-j4cvr\" (UID: \"d8875ebe-e290-4e4d-995e-1c3ddb5ed526\") " pod="watcher-kuttl-default/watcher-db-create-j4cvr" Dec 02 14:51:11 crc kubenswrapper[4814]: I1202 14:51:11.168211 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-j4cvr" Dec 02 14:51:11 crc kubenswrapper[4814]: I1202 14:51:11.195855 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-bb33-account-create-update-9sfxr" Dec 02 14:51:11 crc kubenswrapper[4814]: I1202 14:51:11.439629 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b91ad288-708b-4cd8-9dc9-b2530b628c9c","Type":"ContainerStarted","Data":"e81fd6ae2e8b2015243c654253ef244efac61dda98b120dc2a6c5a67e00b5d5e"} Dec 02 14:51:11 crc kubenswrapper[4814]: I1202 14:51:11.650857 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-j4cvr"] Dec 02 14:51:11 crc kubenswrapper[4814]: W1202 14:51:11.656013 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8875ebe_e290_4e4d_995e_1c3ddb5ed526.slice/crio-205db4ffc295f14f2ab0ac0c25d69e33104b9ebc1faff0cb7ddfa8eb25fe96d1 WatchSource:0}: Error finding container 205db4ffc295f14f2ab0ac0c25d69e33104b9ebc1faff0cb7ddfa8eb25fe96d1: Status 404 returned error can't find the container with id 205db4ffc295f14f2ab0ac0c25d69e33104b9ebc1faff0cb7ddfa8eb25fe96d1 Dec 02 14:51:11 crc kubenswrapper[4814]: I1202 14:51:11.748898 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-bb33-account-create-update-9sfxr"] Dec 02 14:51:11 crc kubenswrapper[4814]: I1202 14:51:11.814145 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="5ccf0f7e-6682-4f67-b164-a3b6ee53133e" containerName="watcher-api" 
probeResult="failure" output="Get \"http://10.217.0.132:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:51:11 crc kubenswrapper[4814]: I1202 14:51:11.814209 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="5ccf0f7e-6682-4f67-b164-a3b6ee53133e" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.132:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:51:12 crc kubenswrapper[4814]: I1202 14:51:12.154030 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df93acdc-d9c1-4ba8-8628-8ec9e51d5023" path="/var/lib/kubelet/pods/df93acdc-d9c1-4ba8-8628-8ec9e51d5023/volumes" Dec 02 14:51:12 crc kubenswrapper[4814]: I1202 14:51:12.451745 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b91ad288-708b-4cd8-9dc9-b2530b628c9c","Type":"ContainerStarted","Data":"b2b693bec5c3a6178a472950c827d5a8ec466715f81677f8c440673c969f153c"} Dec 02 14:51:12 crc kubenswrapper[4814]: I1202 14:51:12.454054 4814 generic.go:334] "Generic (PLEG): container finished" podID="8c6b7a74-3bde-4eb4-816b-97124f6bd079" containerID="c9856a59876bf8a7941f35988ebea958a5093876e0e732887365d9a167652c22" exitCode=0 Dec 02 14:51:12 crc kubenswrapper[4814]: I1202 14:51:12.454114 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-bb33-account-create-update-9sfxr" event={"ID":"8c6b7a74-3bde-4eb4-816b-97124f6bd079","Type":"ContainerDied","Data":"c9856a59876bf8a7941f35988ebea958a5093876e0e732887365d9a167652c22"} Dec 02 14:51:12 crc kubenswrapper[4814]: I1202 14:51:12.454134 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-bb33-account-create-update-9sfxr" 
event={"ID":"8c6b7a74-3bde-4eb4-816b-97124f6bd079","Type":"ContainerStarted","Data":"1830c0df40d273d62dc608f6b89f5d13643922c48ccab4c6fdc8f24c28807603"} Dec 02 14:51:12 crc kubenswrapper[4814]: I1202 14:51:12.455354 4814 generic.go:334] "Generic (PLEG): container finished" podID="d8875ebe-e290-4e4d-995e-1c3ddb5ed526" containerID="82e01a2287fc71855bf41ba1dd4aca09709345c5a0e2ed867bc2cfa9b9f7db38" exitCode=0 Dec 02 14:51:12 crc kubenswrapper[4814]: I1202 14:51:12.455395 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-j4cvr" event={"ID":"d8875ebe-e290-4e4d-995e-1c3ddb5ed526","Type":"ContainerDied","Data":"82e01a2287fc71855bf41ba1dd4aca09709345c5a0e2ed867bc2cfa9b9f7db38"} Dec 02 14:51:12 crc kubenswrapper[4814]: I1202 14:51:12.455420 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-j4cvr" event={"ID":"d8875ebe-e290-4e4d-995e-1c3ddb5ed526","Type":"ContainerStarted","Data":"205db4ffc295f14f2ab0ac0c25d69e33104b9ebc1faff0cb7ddfa8eb25fe96d1"} Dec 02 14:51:13 crc kubenswrapper[4814]: I1202 14:51:13.967198 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-j4cvr" Dec 02 14:51:13 crc kubenswrapper[4814]: I1202 14:51:13.974829 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-bb33-account-create-update-9sfxr" Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.081268 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q7nm\" (UniqueName: \"kubernetes.io/projected/d8875ebe-e290-4e4d-995e-1c3ddb5ed526-kube-api-access-6q7nm\") pod \"d8875ebe-e290-4e4d-995e-1c3ddb5ed526\" (UID: \"d8875ebe-e290-4e4d-995e-1c3ddb5ed526\") " Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.081367 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhts2\" (UniqueName: \"kubernetes.io/projected/8c6b7a74-3bde-4eb4-816b-97124f6bd079-kube-api-access-lhts2\") pod \"8c6b7a74-3bde-4eb4-816b-97124f6bd079\" (UID: \"8c6b7a74-3bde-4eb4-816b-97124f6bd079\") " Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.081406 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8875ebe-e290-4e4d-995e-1c3ddb5ed526-operator-scripts\") pod \"d8875ebe-e290-4e4d-995e-1c3ddb5ed526\" (UID: \"d8875ebe-e290-4e4d-995e-1c3ddb5ed526\") " Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.081542 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c6b7a74-3bde-4eb4-816b-97124f6bd079-operator-scripts\") pod \"8c6b7a74-3bde-4eb4-816b-97124f6bd079\" (UID: \"8c6b7a74-3bde-4eb4-816b-97124f6bd079\") " Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.083297 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8875ebe-e290-4e4d-995e-1c3ddb5ed526-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8875ebe-e290-4e4d-995e-1c3ddb5ed526" (UID: "d8875ebe-e290-4e4d-995e-1c3ddb5ed526"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.083341 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c6b7a74-3bde-4eb4-816b-97124f6bd079-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c6b7a74-3bde-4eb4-816b-97124f6bd079" (UID: "8c6b7a74-3bde-4eb4-816b-97124f6bd079"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.089188 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c6b7a74-3bde-4eb4-816b-97124f6bd079-kube-api-access-lhts2" (OuterVolumeSpecName: "kube-api-access-lhts2") pod "8c6b7a74-3bde-4eb4-816b-97124f6bd079" (UID: "8c6b7a74-3bde-4eb4-816b-97124f6bd079"). InnerVolumeSpecName "kube-api-access-lhts2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.089258 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8875ebe-e290-4e4d-995e-1c3ddb5ed526-kube-api-access-6q7nm" (OuterVolumeSpecName: "kube-api-access-6q7nm") pod "d8875ebe-e290-4e4d-995e-1c3ddb5ed526" (UID: "d8875ebe-e290-4e4d-995e-1c3ddb5ed526"). InnerVolumeSpecName "kube-api-access-6q7nm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.183330 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c6b7a74-3bde-4eb4-816b-97124f6bd079-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.183553 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q7nm\" (UniqueName: \"kubernetes.io/projected/d8875ebe-e290-4e4d-995e-1c3ddb5ed526-kube-api-access-6q7nm\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.183574 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhts2\" (UniqueName: \"kubernetes.io/projected/8c6b7a74-3bde-4eb4-816b-97124f6bd079-kube-api-access-lhts2\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.183589 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8875ebe-e290-4e4d-995e-1c3ddb5ed526-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.482383 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-bb33-account-create-update-9sfxr" Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.482410 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-bb33-account-create-update-9sfxr" event={"ID":"8c6b7a74-3bde-4eb4-816b-97124f6bd079","Type":"ContainerDied","Data":"1830c0df40d273d62dc608f6b89f5d13643922c48ccab4c6fdc8f24c28807603"} Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.482455 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1830c0df40d273d62dc608f6b89f5d13643922c48ccab4c6fdc8f24c28807603" Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.486939 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-j4cvr" event={"ID":"d8875ebe-e290-4e4d-995e-1c3ddb5ed526","Type":"ContainerDied","Data":"205db4ffc295f14f2ab0ac0c25d69e33104b9ebc1faff0cb7ddfa8eb25fe96d1"} Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.486970 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="205db4ffc295f14f2ab0ac0c25d69e33104b9ebc1faff0cb7ddfa8eb25fe96d1" Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.487087 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-j4cvr" Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.494329 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b91ad288-708b-4cd8-9dc9-b2530b628c9c","Type":"ContainerStarted","Data":"dd4b6c30cf65f001dad3a6c31a2f44afe9c136770483d949a1f164ed1c327772"} Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.494720 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:14 crc kubenswrapper[4814]: I1202 14:51:14.534884 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.275168896 podStartE2EDuration="6.534861141s" podCreationTimestamp="2025-12-02 14:51:08 +0000 UTC" firstStartedPulling="2025-12-02 14:51:09.308843265 +0000 UTC m=+1331.433205469" lastFinishedPulling="2025-12-02 14:51:13.5685355 +0000 UTC m=+1335.692897714" observedRunningTime="2025-12-02 14:51:14.523504947 +0000 UTC m=+1336.647867201" watchObservedRunningTime="2025-12-02 14:51:14.534861141 +0000 UTC m=+1336.659223355" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.136655 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-q62cx"] Dec 02 14:51:16 crc kubenswrapper[4814]: E1202 14:51:16.137183 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8875ebe-e290-4e4d-995e-1c3ddb5ed526" containerName="mariadb-database-create" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.137195 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8875ebe-e290-4e4d-995e-1c3ddb5ed526" containerName="mariadb-database-create" Dec 02 14:51:16 crc kubenswrapper[4814]: E1202 14:51:16.137212 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6b7a74-3bde-4eb4-816b-97124f6bd079" containerName="mariadb-account-create-update" Dec 02 
14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.137218 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6b7a74-3bde-4eb4-816b-97124f6bd079" containerName="mariadb-account-create-update" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.137428 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8875ebe-e290-4e4d-995e-1c3ddb5ed526" containerName="mariadb-database-create" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.137445 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c6b7a74-3bde-4eb4-816b-97124f6bd079" containerName="mariadb-account-create-update" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.138152 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.140429 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.141311 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-vgls7" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.157333 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-q62cx"] Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.322363 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/095d1d01-280c-4be3-b49b-c9e25260ad69-config-data\") pod \"watcher-kuttl-db-sync-q62cx\" (UID: \"095d1d01-280c-4be3-b49b-c9e25260ad69\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.322421 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcmtg\" (UniqueName: 
\"kubernetes.io/projected/095d1d01-280c-4be3-b49b-c9e25260ad69-kube-api-access-gcmtg\") pod \"watcher-kuttl-db-sync-q62cx\" (UID: \"095d1d01-280c-4be3-b49b-c9e25260ad69\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.322458 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/095d1d01-280c-4be3-b49b-c9e25260ad69-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-q62cx\" (UID: \"095d1d01-280c-4be3-b49b-c9e25260ad69\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.322497 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/095d1d01-280c-4be3-b49b-c9e25260ad69-db-sync-config-data\") pod \"watcher-kuttl-db-sync-q62cx\" (UID: \"095d1d01-280c-4be3-b49b-c9e25260ad69\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.423820 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcmtg\" (UniqueName: \"kubernetes.io/projected/095d1d01-280c-4be3-b49b-c9e25260ad69-kube-api-access-gcmtg\") pod \"watcher-kuttl-db-sync-q62cx\" (UID: \"095d1d01-280c-4be3-b49b-c9e25260ad69\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.423885 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/095d1d01-280c-4be3-b49b-c9e25260ad69-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-q62cx\" (UID: \"095d1d01-280c-4be3-b49b-c9e25260ad69\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.423923 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/095d1d01-280c-4be3-b49b-c9e25260ad69-db-sync-config-data\") pod \"watcher-kuttl-db-sync-q62cx\" (UID: \"095d1d01-280c-4be3-b49b-c9e25260ad69\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.424007 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/095d1d01-280c-4be3-b49b-c9e25260ad69-config-data\") pod \"watcher-kuttl-db-sync-q62cx\" (UID: \"095d1d01-280c-4be3-b49b-c9e25260ad69\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.428673 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/095d1d01-280c-4be3-b49b-c9e25260ad69-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-q62cx\" (UID: \"095d1d01-280c-4be3-b49b-c9e25260ad69\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.429664 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/095d1d01-280c-4be3-b49b-c9e25260ad69-config-data\") pod \"watcher-kuttl-db-sync-q62cx\" (UID: \"095d1d01-280c-4be3-b49b-c9e25260ad69\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.437587 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/095d1d01-280c-4be3-b49b-c9e25260ad69-db-sync-config-data\") pod \"watcher-kuttl-db-sync-q62cx\" (UID: \"095d1d01-280c-4be3-b49b-c9e25260ad69\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.442058 4814 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gcmtg\" (UniqueName: \"kubernetes.io/projected/095d1d01-280c-4be3-b49b-c9e25260ad69-kube-api-access-gcmtg\") pod \"watcher-kuttl-db-sync-q62cx\" (UID: \"095d1d01-280c-4be3-b49b-c9e25260ad69\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.455967 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" Dec 02 14:51:16 crc kubenswrapper[4814]: I1202 14:51:16.998929 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-q62cx"] Dec 02 14:51:17 crc kubenswrapper[4814]: I1202 14:51:17.542211 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" event={"ID":"095d1d01-280c-4be3-b49b-c9e25260ad69","Type":"ContainerStarted","Data":"cf2469296607bb13cdf4b81c85ef987173b274757267032d6d5b7103f350a034"} Dec 02 14:51:17 crc kubenswrapper[4814]: I1202 14:51:17.542730 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" event={"ID":"095d1d01-280c-4be3-b49b-c9e25260ad69","Type":"ContainerStarted","Data":"b791f29061f735e77c6ffceef8c7bcf432fe19088fdfd01881434067c882b60a"} Dec 02 14:51:17 crc kubenswrapper[4814]: I1202 14:51:17.568284 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" podStartSLOduration=1.568262324 podStartE2EDuration="1.568262324s" podCreationTimestamp="2025-12-02 14:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:51:17.563175016 +0000 UTC m=+1339.687537250" watchObservedRunningTime="2025-12-02 14:51:17.568262324 +0000 UTC m=+1339.692624528" Dec 02 14:51:20 crc kubenswrapper[4814]: I1202 14:51:20.568303 4814 generic.go:334] "Generic (PLEG): 
container finished" podID="095d1d01-280c-4be3-b49b-c9e25260ad69" containerID="cf2469296607bb13cdf4b81c85ef987173b274757267032d6d5b7103f350a034" exitCode=0 Dec 02 14:51:20 crc kubenswrapper[4814]: I1202 14:51:20.568446 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" event={"ID":"095d1d01-280c-4be3-b49b-c9e25260ad69","Type":"ContainerDied","Data":"cf2469296607bb13cdf4b81c85ef987173b274757267032d6d5b7103f350a034"} Dec 02 14:51:20 crc kubenswrapper[4814]: I1202 14:51:20.651442 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:51:20 crc kubenswrapper[4814]: I1202 14:51:20.651521 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:51:21 crc kubenswrapper[4814]: I1202 14:51:21.920701 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.119509 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcmtg\" (UniqueName: \"kubernetes.io/projected/095d1d01-280c-4be3-b49b-c9e25260ad69-kube-api-access-gcmtg\") pod \"095d1d01-280c-4be3-b49b-c9e25260ad69\" (UID: \"095d1d01-280c-4be3-b49b-c9e25260ad69\") " Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.119615 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/095d1d01-280c-4be3-b49b-c9e25260ad69-db-sync-config-data\") pod \"095d1d01-280c-4be3-b49b-c9e25260ad69\" (UID: \"095d1d01-280c-4be3-b49b-c9e25260ad69\") " Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.119643 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/095d1d01-280c-4be3-b49b-c9e25260ad69-combined-ca-bundle\") pod \"095d1d01-280c-4be3-b49b-c9e25260ad69\" (UID: \"095d1d01-280c-4be3-b49b-c9e25260ad69\") " Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.119785 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/095d1d01-280c-4be3-b49b-c9e25260ad69-config-data\") pod \"095d1d01-280c-4be3-b49b-c9e25260ad69\" (UID: \"095d1d01-280c-4be3-b49b-c9e25260ad69\") " Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.126101 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/095d1d01-280c-4be3-b49b-c9e25260ad69-kube-api-access-gcmtg" (OuterVolumeSpecName: "kube-api-access-gcmtg") pod "095d1d01-280c-4be3-b49b-c9e25260ad69" (UID: "095d1d01-280c-4be3-b49b-c9e25260ad69"). InnerVolumeSpecName "kube-api-access-gcmtg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.129491 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/095d1d01-280c-4be3-b49b-c9e25260ad69-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "095d1d01-280c-4be3-b49b-c9e25260ad69" (UID: "095d1d01-280c-4be3-b49b-c9e25260ad69"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.163387 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/095d1d01-280c-4be3-b49b-c9e25260ad69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "095d1d01-280c-4be3-b49b-c9e25260ad69" (UID: "095d1d01-280c-4be3-b49b-c9e25260ad69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.185526 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/095d1d01-280c-4be3-b49b-c9e25260ad69-config-data" (OuterVolumeSpecName: "config-data") pod "095d1d01-280c-4be3-b49b-c9e25260ad69" (UID: "095d1d01-280c-4be3-b49b-c9e25260ad69"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.222172 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/095d1d01-280c-4be3-b49b-c9e25260ad69-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.222214 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcmtg\" (UniqueName: \"kubernetes.io/projected/095d1d01-280c-4be3-b49b-c9e25260ad69-kube-api-access-gcmtg\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.222446 4814 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/095d1d01-280c-4be3-b49b-c9e25260ad69-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.222463 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/095d1d01-280c-4be3-b49b-c9e25260ad69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.604719 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" event={"ID":"095d1d01-280c-4be3-b49b-c9e25260ad69","Type":"ContainerDied","Data":"b791f29061f735e77c6ffceef8c7bcf432fe19088fdfd01881434067c882b60a"} Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.604789 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b791f29061f735e77c6ffceef8c7bcf432fe19088fdfd01881434067c882b60a" Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.604791 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-q62cx" Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.945240 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 14:51:22 crc kubenswrapper[4814]: E1202 14:51:22.945833 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="095d1d01-280c-4be3-b49b-c9e25260ad69" containerName="watcher-kuttl-db-sync" Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.945850 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="095d1d01-280c-4be3-b49b-c9e25260ad69" containerName="watcher-kuttl-db-sync" Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.946010 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="095d1d01-280c-4be3-b49b-c9e25260ad69" containerName="watcher-kuttl-db-sync" Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.946868 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.948500 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-vgls7" Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.949784 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.954791 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.955895 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.958260 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.963826 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 14:51:22 crc kubenswrapper[4814]: I1202 14:51:22.973088 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.028946 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.030607 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.032829 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.083051 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.140146 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/663f2b54-4df3-4187-aea7-f45dc1ac6f76-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.140199 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/663f2b54-4df3-4187-aea7-f45dc1ac6f76-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.140218 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/663f2b54-4df3-4187-aea7-f45dc1ac6f76-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.140271 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bf4z\" (UniqueName: \"kubernetes.io/projected/663f2b54-4df3-4187-aea7-f45dc1ac6f76-kube-api-access-8bf4z\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.140304 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.140321 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/663f2b54-4df3-4187-aea7-f45dc1ac6f76-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.140348 4814 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.140376 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7jxr\" (UniqueName: \"kubernetes.io/projected/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-kube-api-access-h7jxr\") pod \"watcher-kuttl-api-0\" (UID: \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.140408 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv8c4\" (UniqueName: \"kubernetes.io/projected/cb717eee-c605-4707-b7f1-0418c1d3bdab-kube-api-access-fv8c4\") pod \"watcher-kuttl-applier-0\" (UID: \"cb717eee-c605-4707-b7f1-0418c1d3bdab\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.140425 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb717eee-c605-4707-b7f1-0418c1d3bdab-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"cb717eee-c605-4707-b7f1-0418c1d3bdab\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.140441 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb717eee-c605-4707-b7f1-0418c1d3bdab-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"cb717eee-c605-4707-b7f1-0418c1d3bdab\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:23 crc 
kubenswrapper[4814]: I1202 14:51:23.140459 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb717eee-c605-4707-b7f1-0418c1d3bdab-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"cb717eee-c605-4707-b7f1-0418c1d3bdab\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.140474 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-logs\") pod \"watcher-kuttl-api-0\" (UID: \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.140490 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.242039 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.242100 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7jxr\" (UniqueName: \"kubernetes.io/projected/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-kube-api-access-h7jxr\") pod \"watcher-kuttl-api-0\" (UID: \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 
14:51:23.242137 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv8c4\" (UniqueName: \"kubernetes.io/projected/cb717eee-c605-4707-b7f1-0418c1d3bdab-kube-api-access-fv8c4\") pod \"watcher-kuttl-applier-0\" (UID: \"cb717eee-c605-4707-b7f1-0418c1d3bdab\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.242155 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb717eee-c605-4707-b7f1-0418c1d3bdab-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"cb717eee-c605-4707-b7f1-0418c1d3bdab\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.242176 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb717eee-c605-4707-b7f1-0418c1d3bdab-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"cb717eee-c605-4707-b7f1-0418c1d3bdab\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.242331 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb717eee-c605-4707-b7f1-0418c1d3bdab-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"cb717eee-c605-4707-b7f1-0418c1d3bdab\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.242350 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-logs\") pod \"watcher-kuttl-api-0\" (UID: \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.242368 4814 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.242403 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/663f2b54-4df3-4187-aea7-f45dc1ac6f76-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.242744 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb717eee-c605-4707-b7f1-0418c1d3bdab-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"cb717eee-c605-4707-b7f1-0418c1d3bdab\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.242760 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-logs\") pod \"watcher-kuttl-api-0\" (UID: \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.242910 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/663f2b54-4df3-4187-aea7-f45dc1ac6f76-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.242930 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/663f2b54-4df3-4187-aea7-f45dc1ac6f76-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.242968 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bf4z\" (UniqueName: \"kubernetes.io/projected/663f2b54-4df3-4187-aea7-f45dc1ac6f76-kube-api-access-8bf4z\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.242998 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.243014 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/663f2b54-4df3-4187-aea7-f45dc1ac6f76-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.243584 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/663f2b54-4df3-4187-aea7-f45dc1ac6f76-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.248174 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/663f2b54-4df3-4187-aea7-f45dc1ac6f76-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.248557 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb717eee-c605-4707-b7f1-0418c1d3bdab-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"cb717eee-c605-4707-b7f1-0418c1d3bdab\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.251205 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/663f2b54-4df3-4187-aea7-f45dc1ac6f76-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.252705 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.253874 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb717eee-c605-4707-b7f1-0418c1d3bdab-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"cb717eee-c605-4707-b7f1-0418c1d3bdab\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.254425 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/663f2b54-4df3-4187-aea7-f45dc1ac6f76-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.254593 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.262867 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.265712 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv8c4\" (UniqueName: \"kubernetes.io/projected/cb717eee-c605-4707-b7f1-0418c1d3bdab-kube-api-access-fv8c4\") pod \"watcher-kuttl-applier-0\" (UID: \"cb717eee-c605-4707-b7f1-0418c1d3bdab\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.272869 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7jxr\" (UniqueName: \"kubernetes.io/projected/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-kube-api-access-h7jxr\") pod \"watcher-kuttl-api-0\" (UID: \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.279852 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bf4z\" (UniqueName: 
\"kubernetes.io/projected/663f2b54-4df3-4187-aea7-f45dc1ac6f76-kube-api-access-8bf4z\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.282203 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.355884 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.570725 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.781062 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 14:51:23 crc kubenswrapper[4814]: W1202 14:51:23.789748 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb717eee_c605_4707_b7f1_0418c1d3bdab.slice/crio-b7d664dcc8ab950f89e226e5b4f0bb1955a03eb9e48f6c9e7421c93063c182eb WatchSource:0}: Error finding container b7d664dcc8ab950f89e226e5b4f0bb1955a03eb9e48f6c9e7421c93063c182eb: Status 404 returned error can't find the container with id b7d664dcc8ab950f89e226e5b4f0bb1955a03eb9e48f6c9e7421c93063c182eb Dec 02 14:51:23 crc kubenswrapper[4814]: I1202 14:51:23.897367 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 14:51:23 crc kubenswrapper[4814]: W1202 14:51:23.900830 4814 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod663f2b54_4df3_4187_aea7_f45dc1ac6f76.slice/crio-93e1a14d87573c1a56a9d58b1930ba891c0c0c2dcfaed13938f25f323b3cc17a WatchSource:0}: Error finding container 93e1a14d87573c1a56a9d58b1930ba891c0c0c2dcfaed13938f25f323b3cc17a: Status 404 returned error can't find the container with id 93e1a14d87573c1a56a9d58b1930ba891c0c0c2dcfaed13938f25f323b3cc17a Dec 02 14:51:24 crc kubenswrapper[4814]: I1202 14:51:24.038173 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 14:51:24 crc kubenswrapper[4814]: W1202 14:51:24.054490 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45e08ae5_2eea_4383_b429_36dfdbbdfc6d.slice/crio-10f30f973f370410d5e8c7656e83c9dc07ddd95c152ef4e5ed32bd6f3439a1c7 WatchSource:0}: Error finding container 10f30f973f370410d5e8c7656e83c9dc07ddd95c152ef4e5ed32bd6f3439a1c7: Status 404 returned error can't find the container with id 10f30f973f370410d5e8c7656e83c9dc07ddd95c152ef4e5ed32bd6f3439a1c7 Dec 02 14:51:24 crc kubenswrapper[4814]: I1202 14:51:24.638346 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"663f2b54-4df3-4187-aea7-f45dc1ac6f76","Type":"ContainerStarted","Data":"6249ef8654323e54ae5d3e7c8ae98ef46e209103a09c844327bbd8c7c924659f"} Dec 02 14:51:24 crc kubenswrapper[4814]: I1202 14:51:24.638715 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"663f2b54-4df3-4187-aea7-f45dc1ac6f76","Type":"ContainerStarted","Data":"93e1a14d87573c1a56a9d58b1930ba891c0c0c2dcfaed13938f25f323b3cc17a"} Dec 02 14:51:24 crc kubenswrapper[4814]: I1202 14:51:24.640120 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"cb717eee-c605-4707-b7f1-0418c1d3bdab","Type":"ContainerStarted","Data":"6c9fd704787d2699d24193846a32cab6ccd81bbcc3bf07fda610bca043da1f94"} Dec 02 14:51:24 crc kubenswrapper[4814]: I1202 14:51:24.640171 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"cb717eee-c605-4707-b7f1-0418c1d3bdab","Type":"ContainerStarted","Data":"b7d664dcc8ab950f89e226e5b4f0bb1955a03eb9e48f6c9e7421c93063c182eb"} Dec 02 14:51:24 crc kubenswrapper[4814]: I1202 14:51:24.642572 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"45e08ae5-2eea-4383-b429-36dfdbbdfc6d","Type":"ContainerStarted","Data":"f087375dfe30d116eae38d3a2fe71c2eb012fc9f692588d5d08ed71a4e6cb100"} Dec 02 14:51:24 crc kubenswrapper[4814]: I1202 14:51:24.642603 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"45e08ae5-2eea-4383-b429-36dfdbbdfc6d","Type":"ContainerStarted","Data":"23e1420aa59272a816f74fa2d41e4f71717bb34d6d734b251ad372a09779a476"} Dec 02 14:51:24 crc kubenswrapper[4814]: I1202 14:51:24.642616 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"45e08ae5-2eea-4383-b429-36dfdbbdfc6d","Type":"ContainerStarted","Data":"10f30f973f370410d5e8c7656e83c9dc07ddd95c152ef4e5ed32bd6f3439a1c7"} Dec 02 14:51:24 crc kubenswrapper[4814]: I1202 14:51:24.642820 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:24 crc kubenswrapper[4814]: I1202 14:51:24.644054 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="45e08ae5-2eea-4383-b429-36dfdbbdfc6d" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.140:9322/\": dial tcp 10.217.0.140:9322: connect: connection refused" Dec 02 14:51:24 crc 
kubenswrapper[4814]: I1202 14:51:24.671048 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.671025511 podStartE2EDuration="2.671025511s" podCreationTimestamp="2025-12-02 14:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:51:24.66057665 +0000 UTC m=+1346.784938864" watchObservedRunningTime="2025-12-02 14:51:24.671025511 +0000 UTC m=+1346.795387715" Dec 02 14:51:24 crc kubenswrapper[4814]: I1202 14:51:24.693668 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.693648837 podStartE2EDuration="2.693648837s" podCreationTimestamp="2025-12-02 14:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:51:24.690299183 +0000 UTC m=+1346.814661407" watchObservedRunningTime="2025-12-02 14:51:24.693648837 +0000 UTC m=+1346.818011041" Dec 02 14:51:24 crc kubenswrapper[4814]: I1202 14:51:24.723220 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.723195986 podStartE2EDuration="2.723195986s" podCreationTimestamp="2025-12-02 14:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:51:24.714998521 +0000 UTC m=+1346.839360725" watchObservedRunningTime="2025-12-02 14:51:24.723195986 +0000 UTC m=+1346.847558190" Dec 02 14:51:27 crc kubenswrapper[4814]: I1202 14:51:27.819025 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:28 crc kubenswrapper[4814]: I1202 14:51:28.282920 4814 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:28 crc kubenswrapper[4814]: I1202 14:51:28.572055 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:33 crc kubenswrapper[4814]: I1202 14:51:33.283977 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:33 crc kubenswrapper[4814]: I1202 14:51:33.331344 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:33 crc kubenswrapper[4814]: I1202 14:51:33.358147 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:33 crc kubenswrapper[4814]: I1202 14:51:33.392681 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:33 crc kubenswrapper[4814]: I1202 14:51:33.572038 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:33 crc kubenswrapper[4814]: I1202 14:51:33.576556 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:33 crc kubenswrapper[4814]: I1202 14:51:33.719437 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:33 crc kubenswrapper[4814]: I1202 14:51:33.726015 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:33 crc kubenswrapper[4814]: I1202 14:51:33.743377 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 
14:51:33 crc kubenswrapper[4814]: I1202 14:51:33.743428 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:35 crc kubenswrapper[4814]: I1202 14:51:35.080991 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:51:35 crc kubenswrapper[4814]: I1202 14:51:35.081744 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" containerName="ceilometer-central-agent" containerID="cri-o://1407e37671a45e01b1a405f44a7220fddf69396b0b48de02d8b27b5b021421b3" gracePeriod=30 Dec 02 14:51:35 crc kubenswrapper[4814]: I1202 14:51:35.081859 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" containerName="sg-core" containerID="cri-o://b2b693bec5c3a6178a472950c827d5a8ec466715f81677f8c440673c969f153c" gracePeriod=30 Dec 02 14:51:35 crc kubenswrapper[4814]: I1202 14:51:35.081885 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" containerName="ceilometer-notification-agent" containerID="cri-o://e81fd6ae2e8b2015243c654253ef244efac61dda98b120dc2a6c5a67e00b5d5e" gracePeriod=30 Dec 02 14:51:35 crc kubenswrapper[4814]: I1202 14:51:35.082008 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" containerName="proxy-httpd" containerID="cri-o://dd4b6c30cf65f001dad3a6c31a2f44afe9c136770483d949a1f164ed1c327772" gracePeriod=30 Dec 02 14:51:35 crc kubenswrapper[4814]: I1202 14:51:35.092569 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" 
containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.136:3000/\": EOF" Dec 02 14:51:35 crc kubenswrapper[4814]: I1202 14:51:35.739329 4814 generic.go:334] "Generic (PLEG): container finished" podID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" containerID="dd4b6c30cf65f001dad3a6c31a2f44afe9c136770483d949a1f164ed1c327772" exitCode=0 Dec 02 14:51:35 crc kubenswrapper[4814]: I1202 14:51:35.739367 4814 generic.go:334] "Generic (PLEG): container finished" podID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" containerID="b2b693bec5c3a6178a472950c827d5a8ec466715f81677f8c440673c969f153c" exitCode=2 Dec 02 14:51:35 crc kubenswrapper[4814]: I1202 14:51:35.739377 4814 generic.go:334] "Generic (PLEG): container finished" podID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" containerID="1407e37671a45e01b1a405f44a7220fddf69396b0b48de02d8b27b5b021421b3" exitCode=0 Dec 02 14:51:35 crc kubenswrapper[4814]: I1202 14:51:35.739367 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b91ad288-708b-4cd8-9dc9-b2530b628c9c","Type":"ContainerDied","Data":"dd4b6c30cf65f001dad3a6c31a2f44afe9c136770483d949a1f164ed1c327772"} Dec 02 14:51:35 crc kubenswrapper[4814]: I1202 14:51:35.739476 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b91ad288-708b-4cd8-9dc9-b2530b628c9c","Type":"ContainerDied","Data":"b2b693bec5c3a6178a472950c827d5a8ec466715f81677f8c440673c969f153c"} Dec 02 14:51:35 crc kubenswrapper[4814]: I1202 14:51:35.739508 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b91ad288-708b-4cd8-9dc9-b2530b628c9c","Type":"ContainerDied","Data":"1407e37671a45e01b1a405f44a7220fddf69396b0b48de02d8b27b5b021421b3"} Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.080640 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-q62cx"] Dec 02 14:51:36 crc 
kubenswrapper[4814]: I1202 14:51:36.086427 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-q62cx"] Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.122985 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcherbb33-account-delete-pzgdl"] Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.124590 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherbb33-account-delete-pzgdl" Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.134494 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.153861 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="095d1d01-280c-4be3-b49b-c9e25260ad69" path="/var/lib/kubelet/pods/095d1d01-280c-4be3-b49b-c9e25260ad69/volumes" Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.173495 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherbb33-account-delete-pzgdl"] Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.188921 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.189135 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="45e08ae5-2eea-4383-b429-36dfdbbdfc6d" containerName="watcher-kuttl-api-log" containerID="cri-o://23e1420aa59272a816f74fa2d41e4f71717bb34d6d734b251ad372a09779a476" gracePeriod=30 Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.190823 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="45e08ae5-2eea-4383-b429-36dfdbbdfc6d" containerName="watcher-api" 
containerID="cri-o://f087375dfe30d116eae38d3a2fe71c2eb012fc9f692588d5d08ed71a4e6cb100" gracePeriod=30 Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.196194 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.196369 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="cb717eee-c605-4707-b7f1-0418c1d3bdab" containerName="watcher-applier" containerID="cri-o://6c9fd704787d2699d24193846a32cab6ccd81bbcc3bf07fda610bca043da1f94" gracePeriod=30 Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.266742 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-828x5\" (UniqueName: \"kubernetes.io/projected/b7c4bf33-9bdc-4810-a7e8-2b469cd055e3-kube-api-access-828x5\") pod \"watcherbb33-account-delete-pzgdl\" (UID: \"b7c4bf33-9bdc-4810-a7e8-2b469cd055e3\") " pod="watcher-kuttl-default/watcherbb33-account-delete-pzgdl" Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.266908 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c4bf33-9bdc-4810-a7e8-2b469cd055e3-operator-scripts\") pod \"watcherbb33-account-delete-pzgdl\" (UID: \"b7c4bf33-9bdc-4810-a7e8-2b469cd055e3\") " pod="watcher-kuttl-default/watcherbb33-account-delete-pzgdl" Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.368599 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c4bf33-9bdc-4810-a7e8-2b469cd055e3-operator-scripts\") pod \"watcherbb33-account-delete-pzgdl\" (UID: \"b7c4bf33-9bdc-4810-a7e8-2b469cd055e3\") " pod="watcher-kuttl-default/watcherbb33-account-delete-pzgdl" Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.368691 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-828x5\" (UniqueName: \"kubernetes.io/projected/b7c4bf33-9bdc-4810-a7e8-2b469cd055e3-kube-api-access-828x5\") pod \"watcherbb33-account-delete-pzgdl\" (UID: \"b7c4bf33-9bdc-4810-a7e8-2b469cd055e3\") " pod="watcher-kuttl-default/watcherbb33-account-delete-pzgdl" Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.369425 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c4bf33-9bdc-4810-a7e8-2b469cd055e3-operator-scripts\") pod \"watcherbb33-account-delete-pzgdl\" (UID: \"b7c4bf33-9bdc-4810-a7e8-2b469cd055e3\") " pod="watcher-kuttl-default/watcherbb33-account-delete-pzgdl" Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.404997 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-828x5\" (UniqueName: \"kubernetes.io/projected/b7c4bf33-9bdc-4810-a7e8-2b469cd055e3-kube-api-access-828x5\") pod \"watcherbb33-account-delete-pzgdl\" (UID: \"b7c4bf33-9bdc-4810-a7e8-2b469cd055e3\") " pod="watcher-kuttl-default/watcherbb33-account-delete-pzgdl" Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.439006 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherbb33-account-delete-pzgdl" Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.748618 4814 generic.go:334] "Generic (PLEG): container finished" podID="45e08ae5-2eea-4383-b429-36dfdbbdfc6d" containerID="23e1420aa59272a816f74fa2d41e4f71717bb34d6d734b251ad372a09779a476" exitCode=143 Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.748978 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="663f2b54-4df3-4187-aea7-f45dc1ac6f76" containerName="watcher-decision-engine" containerID="cri-o://6249ef8654323e54ae5d3e7c8ae98ef46e209103a09c844327bbd8c7c924659f" gracePeriod=30 Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.749200 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"45e08ae5-2eea-4383-b429-36dfdbbdfc6d","Type":"ContainerDied","Data":"23e1420aa59272a816f74fa2d41e4f71717bb34d6d734b251ad372a09779a476"} Dec 02 14:51:36 crc kubenswrapper[4814]: I1202 14:51:36.904876 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherbb33-account-delete-pzgdl"] Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.288985 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.389214 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b91ad288-708b-4cd8-9dc9-b2530b628c9c-log-httpd\") pod \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.389330 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-combined-ca-bundle\") pod \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.389523 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-ceilometer-tls-certs\") pod \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.389572 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b91ad288-708b-4cd8-9dc9-b2530b628c9c-run-httpd\") pod \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.389615 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-sg-core-conf-yaml\") pod \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.389661 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-config-data\") pod \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.389677 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-scripts\") pod \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.389712 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdrhj\" (UniqueName: \"kubernetes.io/projected/b91ad288-708b-4cd8-9dc9-b2530b628c9c-kube-api-access-xdrhj\") pod \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\" (UID: \"b91ad288-708b-4cd8-9dc9-b2530b628c9c\") " Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.390107 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b91ad288-708b-4cd8-9dc9-b2530b628c9c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b91ad288-708b-4cd8-9dc9-b2530b628c9c" (UID: "b91ad288-708b-4cd8-9dc9-b2530b628c9c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.394199 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b91ad288-708b-4cd8-9dc9-b2530b628c9c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b91ad288-708b-4cd8-9dc9-b2530b628c9c" (UID: "b91ad288-708b-4cd8-9dc9-b2530b628c9c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.409582 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-scripts" (OuterVolumeSpecName: "scripts") pod "b91ad288-708b-4cd8-9dc9-b2530b628c9c" (UID: "b91ad288-708b-4cd8-9dc9-b2530b628c9c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.414457 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b91ad288-708b-4cd8-9dc9-b2530b628c9c-kube-api-access-xdrhj" (OuterVolumeSpecName: "kube-api-access-xdrhj") pod "b91ad288-708b-4cd8-9dc9-b2530b628c9c" (UID: "b91ad288-708b-4cd8-9dc9-b2530b628c9c"). InnerVolumeSpecName "kube-api-access-xdrhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.421627 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b91ad288-708b-4cd8-9dc9-b2530b628c9c" (UID: "b91ad288-708b-4cd8-9dc9-b2530b628c9c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.492428 4814 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b91ad288-708b-4cd8-9dc9-b2530b628c9c-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.494358 4814 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.494389 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.494400 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdrhj\" (UniqueName: \"kubernetes.io/projected/b91ad288-708b-4cd8-9dc9-b2530b628c9c-kube-api-access-xdrhj\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.494410 4814 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b91ad288-708b-4cd8-9dc9-b2530b628c9c-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.516374 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b91ad288-708b-4cd8-9dc9-b2530b628c9c" (UID: "b91ad288-708b-4cd8-9dc9-b2530b628c9c"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.544371 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-config-data" (OuterVolumeSpecName: "config-data") pod "b91ad288-708b-4cd8-9dc9-b2530b628c9c" (UID: "b91ad288-708b-4cd8-9dc9-b2530b628c9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.554179 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b91ad288-708b-4cd8-9dc9-b2530b628c9c" (UID: "b91ad288-708b-4cd8-9dc9-b2530b628c9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.596210 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.596255 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.596267 4814 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b91ad288-708b-4cd8-9dc9-b2530b628c9c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.632633 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.760194 4814 generic.go:334] "Generic (PLEG): container finished" podID="b7c4bf33-9bdc-4810-a7e8-2b469cd055e3" containerID="7d01f2f77838a309ed7b99de6c14e4248ac2f46ba15901a908cb94e7a925c00c" exitCode=0 Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.760294 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherbb33-account-delete-pzgdl" event={"ID":"b7c4bf33-9bdc-4810-a7e8-2b469cd055e3","Type":"ContainerDied","Data":"7d01f2f77838a309ed7b99de6c14e4248ac2f46ba15901a908cb94e7a925c00c"} Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.760326 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherbb33-account-delete-pzgdl" event={"ID":"b7c4bf33-9bdc-4810-a7e8-2b469cd055e3","Type":"ContainerStarted","Data":"ef2dd2b77571119d9919bbf7578e93eaec17f86c6416b2743543f251873ec32b"} Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.762240 4814 generic.go:334] "Generic (PLEG): container finished" podID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" containerID="e81fd6ae2e8b2015243c654253ef244efac61dda98b120dc2a6c5a67e00b5d5e" exitCode=0 Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.762301 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b91ad288-708b-4cd8-9dc9-b2530b628c9c","Type":"ContainerDied","Data":"e81fd6ae2e8b2015243c654253ef244efac61dda98b120dc2a6c5a67e00b5d5e"} Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.762322 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b91ad288-708b-4cd8-9dc9-b2530b628c9c","Type":"ContainerDied","Data":"dd0bb3445452a472b1f7d3c70be9a554a5b97bd7af1f093c2d88e30742078a10"} Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.762341 4814 scope.go:117] "RemoveContainer" 
containerID="dd4b6c30cf65f001dad3a6c31a2f44afe9c136770483d949a1f164ed1c327772" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.762509 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.779522 4814 generic.go:334] "Generic (PLEG): container finished" podID="45e08ae5-2eea-4383-b429-36dfdbbdfc6d" containerID="f087375dfe30d116eae38d3a2fe71c2eb012fc9f692588d5d08ed71a4e6cb100" exitCode=0 Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.779566 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"45e08ae5-2eea-4383-b429-36dfdbbdfc6d","Type":"ContainerDied","Data":"f087375dfe30d116eae38d3a2fe71c2eb012fc9f692588d5d08ed71a4e6cb100"} Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.779593 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"45e08ae5-2eea-4383-b429-36dfdbbdfc6d","Type":"ContainerDied","Data":"10f30f973f370410d5e8c7656e83c9dc07ddd95c152ef4e5ed32bd6f3439a1c7"} Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.779648 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.805718 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-combined-ca-bundle\") pod \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\" (UID: \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\") " Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.805791 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7jxr\" (UniqueName: \"kubernetes.io/projected/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-kube-api-access-h7jxr\") pod \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\" (UID: \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\") " Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.805946 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-logs\") pod \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\" (UID: \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\") " Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.805994 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-config-data\") pod \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\" (UID: \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\") " Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.806017 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-custom-prometheus-ca\") pod \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\" (UID: \"45e08ae5-2eea-4383-b429-36dfdbbdfc6d\") " Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.807041 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-logs" (OuterVolumeSpecName: "logs") pod "45e08ae5-2eea-4383-b429-36dfdbbdfc6d" (UID: "45e08ae5-2eea-4383-b429-36dfdbbdfc6d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.810693 4814 scope.go:117] "RemoveContainer" containerID="b2b693bec5c3a6178a472950c827d5a8ec466715f81677f8c440673c969f153c" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.835521 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-kube-api-access-h7jxr" (OuterVolumeSpecName: "kube-api-access-h7jxr") pod "45e08ae5-2eea-4383-b429-36dfdbbdfc6d" (UID: "45e08ae5-2eea-4383-b429-36dfdbbdfc6d"). InnerVolumeSpecName "kube-api-access-h7jxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.852631 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.897412 4814 scope.go:117] "RemoveContainer" containerID="e81fd6ae2e8b2015243c654253ef244efac61dda98b120dc2a6c5a67e00b5d5e" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.910593 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.912214 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.912240 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7jxr\" (UniqueName: \"kubernetes.io/projected/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-kube-api-access-h7jxr\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 
14:51:37.921433 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45e08ae5-2eea-4383-b429-36dfdbbdfc6d" (UID: "45e08ae5-2eea-4383-b429-36dfdbbdfc6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.943886 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:51:37 crc kubenswrapper[4814]: E1202 14:51:37.944477 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" containerName="proxy-httpd" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.944497 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" containerName="proxy-httpd" Dec 02 14:51:37 crc kubenswrapper[4814]: E1202 14:51:37.944518 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" containerName="ceilometer-notification-agent" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.944525 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" containerName="ceilometer-notification-agent" Dec 02 14:51:37 crc kubenswrapper[4814]: E1202 14:51:37.944540 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e08ae5-2eea-4383-b429-36dfdbbdfc6d" containerName="watcher-api" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.944545 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e08ae5-2eea-4383-b429-36dfdbbdfc6d" containerName="watcher-api" Dec 02 14:51:37 crc kubenswrapper[4814]: E1202 14:51:37.944555 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" containerName="ceilometer-central-agent" Dec 02 14:51:37 crc 
kubenswrapper[4814]: I1202 14:51:37.944561 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" containerName="ceilometer-central-agent" Dec 02 14:51:37 crc kubenswrapper[4814]: E1202 14:51:37.944576 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" containerName="sg-core" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.944581 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" containerName="sg-core" Dec 02 14:51:37 crc kubenswrapper[4814]: E1202 14:51:37.944588 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e08ae5-2eea-4383-b429-36dfdbbdfc6d" containerName="watcher-kuttl-api-log" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.944595 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e08ae5-2eea-4383-b429-36dfdbbdfc6d" containerName="watcher-kuttl-api-log" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.945012 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" containerName="ceilometer-notification-agent" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.950244 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" containerName="sg-core" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.950277 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e08ae5-2eea-4383-b429-36dfdbbdfc6d" containerName="watcher-kuttl-api-log" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.950294 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e08ae5-2eea-4383-b429-36dfdbbdfc6d" containerName="watcher-api" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.950303 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" 
containerName="ceilometer-central-agent" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.950314 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" containerName="proxy-httpd" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.951894 4814 scope.go:117] "RemoveContainer" containerID="1407e37671a45e01b1a405f44a7220fddf69396b0b48de02d8b27b5b021421b3" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.951910 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.953885 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.954039 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.954144 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.955826 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "45e08ae5-2eea-4383-b429-36dfdbbdfc6d" (UID: "45e08ae5-2eea-4383-b429-36dfdbbdfc6d"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.956673 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-config-data" (OuterVolumeSpecName: "config-data") pod "45e08ae5-2eea-4383-b429-36dfdbbdfc6d" (UID: "45e08ae5-2eea-4383-b429-36dfdbbdfc6d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.962707 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.994991 4814 scope.go:117] "RemoveContainer" containerID="dd4b6c30cf65f001dad3a6c31a2f44afe9c136770483d949a1f164ed1c327772" Dec 02 14:51:37 crc kubenswrapper[4814]: E1202 14:51:37.995896 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd4b6c30cf65f001dad3a6c31a2f44afe9c136770483d949a1f164ed1c327772\": container with ID starting with dd4b6c30cf65f001dad3a6c31a2f44afe9c136770483d949a1f164ed1c327772 not found: ID does not exist" containerID="dd4b6c30cf65f001dad3a6c31a2f44afe9c136770483d949a1f164ed1c327772" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.995941 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd4b6c30cf65f001dad3a6c31a2f44afe9c136770483d949a1f164ed1c327772"} err="failed to get container status \"dd4b6c30cf65f001dad3a6c31a2f44afe9c136770483d949a1f164ed1c327772\": rpc error: code = NotFound desc = could not find container \"dd4b6c30cf65f001dad3a6c31a2f44afe9c136770483d949a1f164ed1c327772\": container with ID starting with dd4b6c30cf65f001dad3a6c31a2f44afe9c136770483d949a1f164ed1c327772 not found: ID does not exist" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.995968 4814 scope.go:117] "RemoveContainer" containerID="b2b693bec5c3a6178a472950c827d5a8ec466715f81677f8c440673c969f153c" Dec 02 14:51:37 crc kubenswrapper[4814]: E1202 14:51:37.996460 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2b693bec5c3a6178a472950c827d5a8ec466715f81677f8c440673c969f153c\": container with ID starting with b2b693bec5c3a6178a472950c827d5a8ec466715f81677f8c440673c969f153c not found: ID 
does not exist" containerID="b2b693bec5c3a6178a472950c827d5a8ec466715f81677f8c440673c969f153c" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.996484 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2b693bec5c3a6178a472950c827d5a8ec466715f81677f8c440673c969f153c"} err="failed to get container status \"b2b693bec5c3a6178a472950c827d5a8ec466715f81677f8c440673c969f153c\": rpc error: code = NotFound desc = could not find container \"b2b693bec5c3a6178a472950c827d5a8ec466715f81677f8c440673c969f153c\": container with ID starting with b2b693bec5c3a6178a472950c827d5a8ec466715f81677f8c440673c969f153c not found: ID does not exist" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.996497 4814 scope.go:117] "RemoveContainer" containerID="e81fd6ae2e8b2015243c654253ef244efac61dda98b120dc2a6c5a67e00b5d5e" Dec 02 14:51:37 crc kubenswrapper[4814]: E1202 14:51:37.997833 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e81fd6ae2e8b2015243c654253ef244efac61dda98b120dc2a6c5a67e00b5d5e\": container with ID starting with e81fd6ae2e8b2015243c654253ef244efac61dda98b120dc2a6c5a67e00b5d5e not found: ID does not exist" containerID="e81fd6ae2e8b2015243c654253ef244efac61dda98b120dc2a6c5a67e00b5d5e" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.997926 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e81fd6ae2e8b2015243c654253ef244efac61dda98b120dc2a6c5a67e00b5d5e"} err="failed to get container status \"e81fd6ae2e8b2015243c654253ef244efac61dda98b120dc2a6c5a67e00b5d5e\": rpc error: code = NotFound desc = could not find container \"e81fd6ae2e8b2015243c654253ef244efac61dda98b120dc2a6c5a67e00b5d5e\": container with ID starting with e81fd6ae2e8b2015243c654253ef244efac61dda98b120dc2a6c5a67e00b5d5e not found: ID does not exist" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.997996 4814 
scope.go:117] "RemoveContainer" containerID="1407e37671a45e01b1a405f44a7220fddf69396b0b48de02d8b27b5b021421b3" Dec 02 14:51:37 crc kubenswrapper[4814]: E1202 14:51:37.998415 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1407e37671a45e01b1a405f44a7220fddf69396b0b48de02d8b27b5b021421b3\": container with ID starting with 1407e37671a45e01b1a405f44a7220fddf69396b0b48de02d8b27b5b021421b3 not found: ID does not exist" containerID="1407e37671a45e01b1a405f44a7220fddf69396b0b48de02d8b27b5b021421b3" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.998482 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1407e37671a45e01b1a405f44a7220fddf69396b0b48de02d8b27b5b021421b3"} err="failed to get container status \"1407e37671a45e01b1a405f44a7220fddf69396b0b48de02d8b27b5b021421b3\": rpc error: code = NotFound desc = could not find container \"1407e37671a45e01b1a405f44a7220fddf69396b0b48de02d8b27b5b021421b3\": container with ID starting with 1407e37671a45e01b1a405f44a7220fddf69396b0b48de02d8b27b5b021421b3 not found: ID does not exist" Dec 02 14:51:37 crc kubenswrapper[4814]: I1202 14:51:37.998514 4814 scope.go:117] "RemoveContainer" containerID="f087375dfe30d116eae38d3a2fe71c2eb012fc9f692588d5d08ed71a4e6cb100" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.013952 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.014146 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.014203 4814 reconciler_common.go:293] "Volume detached for 
volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/45e08ae5-2eea-4383-b429-36dfdbbdfc6d-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.020221 4814 scope.go:117] "RemoveContainer" containerID="23e1420aa59272a816f74fa2d41e4f71717bb34d6d734b251ad372a09779a476" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.038413 4814 scope.go:117] "RemoveContainer" containerID="f087375dfe30d116eae38d3a2fe71c2eb012fc9f692588d5d08ed71a4e6cb100" Dec 02 14:51:38 crc kubenswrapper[4814]: E1202 14:51:38.038836 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f087375dfe30d116eae38d3a2fe71c2eb012fc9f692588d5d08ed71a4e6cb100\": container with ID starting with f087375dfe30d116eae38d3a2fe71c2eb012fc9f692588d5d08ed71a4e6cb100 not found: ID does not exist" containerID="f087375dfe30d116eae38d3a2fe71c2eb012fc9f692588d5d08ed71a4e6cb100" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.038875 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f087375dfe30d116eae38d3a2fe71c2eb012fc9f692588d5d08ed71a4e6cb100"} err="failed to get container status \"f087375dfe30d116eae38d3a2fe71c2eb012fc9f692588d5d08ed71a4e6cb100\": rpc error: code = NotFound desc = could not find container \"f087375dfe30d116eae38d3a2fe71c2eb012fc9f692588d5d08ed71a4e6cb100\": container with ID starting with f087375dfe30d116eae38d3a2fe71c2eb012fc9f692588d5d08ed71a4e6cb100 not found: ID does not exist" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.038914 4814 scope.go:117] "RemoveContainer" containerID="23e1420aa59272a816f74fa2d41e4f71717bb34d6d734b251ad372a09779a476" Dec 02 14:51:38 crc kubenswrapper[4814]: E1202 14:51:38.039230 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"23e1420aa59272a816f74fa2d41e4f71717bb34d6d734b251ad372a09779a476\": container with ID starting with 23e1420aa59272a816f74fa2d41e4f71717bb34d6d734b251ad372a09779a476 not found: ID does not exist" containerID="23e1420aa59272a816f74fa2d41e4f71717bb34d6d734b251ad372a09779a476" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.039339 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23e1420aa59272a816f74fa2d41e4f71717bb34d6d734b251ad372a09779a476"} err="failed to get container status \"23e1420aa59272a816f74fa2d41e4f71717bb34d6d734b251ad372a09779a476\": rpc error: code = NotFound desc = could not find container \"23e1420aa59272a816f74fa2d41e4f71717bb34d6d734b251ad372a09779a476\": container with ID starting with 23e1420aa59272a816f74fa2d41e4f71717bb34d6d734b251ad372a09779a476 not found: ID does not exist" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.112803 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.116573 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28x9p\" (UniqueName: \"kubernetes.io/projected/45e1e4d7-bd31-4615-88c9-f28b59112c39-kube-api-access-28x9p\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.116616 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-scripts\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.116651 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.116671 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.116689 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45e1e4d7-bd31-4615-88c9-f28b59112c39-run-httpd\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.116791 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45e1e4d7-bd31-4615-88c9-f28b59112c39-log-httpd\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.116835 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.116905 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-config-data\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.121538 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.158214 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45e08ae5-2eea-4383-b429-36dfdbbdfc6d" path="/var/lib/kubelet/pods/45e08ae5-2eea-4383-b429-36dfdbbdfc6d/volumes" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.159126 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b91ad288-708b-4cd8-9dc9-b2530b628c9c" path="/var/lib/kubelet/pods/b91ad288-708b-4cd8-9dc9-b2530b628c9c/volumes" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.218397 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28x9p\" (UniqueName: \"kubernetes.io/projected/45e1e4d7-bd31-4615-88c9-f28b59112c39-kube-api-access-28x9p\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.218708 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-scripts\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.218872 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc 
kubenswrapper[4814]: I1202 14:51:38.218964 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.219073 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45e1e4d7-bd31-4615-88c9-f28b59112c39-run-httpd\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.219158 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45e1e4d7-bd31-4615-88c9-f28b59112c39-log-httpd\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.219290 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.219476 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-config-data\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.220550 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/45e1e4d7-bd31-4615-88c9-f28b59112c39-log-httpd\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.220766 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45e1e4d7-bd31-4615-88c9-f28b59112c39-run-httpd\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.223764 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.223825 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.223924 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-config-data\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.224000 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-scripts\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc 
kubenswrapper[4814]: I1202 14:51:38.224682 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.238863 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28x9p\" (UniqueName: \"kubernetes.io/projected/45e1e4d7-bd31-4615-88c9-f28b59112c39-kube-api-access-28x9p\") pod \"ceilometer-0\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.278831 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:38 crc kubenswrapper[4814]: E1202 14:51:38.285218 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c9fd704787d2699d24193846a32cab6ccd81bbcc3bf07fda610bca043da1f94" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 14:51:38 crc kubenswrapper[4814]: E1202 14:51:38.286278 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c9fd704787d2699d24193846a32cab6ccd81bbcc3bf07fda610bca043da1f94" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 14:51:38 crc kubenswrapper[4814]: E1202 14:51:38.287660 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="6c9fd704787d2699d24193846a32cab6ccd81bbcc3bf07fda610bca043da1f94" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 14:51:38 crc kubenswrapper[4814]: E1202 14:51:38.287698 4814 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="cb717eee-c605-4707-b7f1-0418c1d3bdab" containerName="watcher-applier" Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.734697 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:51:38 crc kubenswrapper[4814]: W1202 14:51:38.741331 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45e1e4d7_bd31_4615_88c9_f28b59112c39.slice/crio-05085770e3f0b6e910d5543cbf8a145ad4a5425563cf17274e680ce1a22f731f WatchSource:0}: Error finding container 05085770e3f0b6e910d5543cbf8a145ad4a5425563cf17274e680ce1a22f731f: Status 404 returned error can't find the container with id 05085770e3f0b6e910d5543cbf8a145ad4a5425563cf17274e680ce1a22f731f Dec 02 14:51:38 crc kubenswrapper[4814]: I1202 14:51:38.793812 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"45e1e4d7-bd31-4615-88c9-f28b59112c39","Type":"ContainerStarted","Data":"05085770e3f0b6e910d5543cbf8a145ad4a5425563cf17274e680ce1a22f731f"} Dec 02 14:51:39 crc kubenswrapper[4814]: I1202 14:51:39.179426 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherbb33-account-delete-pzgdl" Dec 02 14:51:39 crc kubenswrapper[4814]: I1202 14:51:39.230992 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-828x5\" (UniqueName: \"kubernetes.io/projected/b7c4bf33-9bdc-4810-a7e8-2b469cd055e3-kube-api-access-828x5\") pod \"b7c4bf33-9bdc-4810-a7e8-2b469cd055e3\" (UID: \"b7c4bf33-9bdc-4810-a7e8-2b469cd055e3\") " Dec 02 14:51:39 crc kubenswrapper[4814]: I1202 14:51:39.231730 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c4bf33-9bdc-4810-a7e8-2b469cd055e3-operator-scripts\") pod \"b7c4bf33-9bdc-4810-a7e8-2b469cd055e3\" (UID: \"b7c4bf33-9bdc-4810-a7e8-2b469cd055e3\") " Dec 02 14:51:39 crc kubenswrapper[4814]: I1202 14:51:39.232529 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7c4bf33-9bdc-4810-a7e8-2b469cd055e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7c4bf33-9bdc-4810-a7e8-2b469cd055e3" (UID: "b7c4bf33-9bdc-4810-a7e8-2b469cd055e3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:51:39 crc kubenswrapper[4814]: I1202 14:51:39.232931 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c4bf33-9bdc-4810-a7e8-2b469cd055e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:39 crc kubenswrapper[4814]: I1202 14:51:39.235016 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7c4bf33-9bdc-4810-a7e8-2b469cd055e3-kube-api-access-828x5" (OuterVolumeSpecName: "kube-api-access-828x5") pod "b7c4bf33-9bdc-4810-a7e8-2b469cd055e3" (UID: "b7c4bf33-9bdc-4810-a7e8-2b469cd055e3"). InnerVolumeSpecName "kube-api-access-828x5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:51:39 crc kubenswrapper[4814]: I1202 14:51:39.338158 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-828x5\" (UniqueName: \"kubernetes.io/projected/b7c4bf33-9bdc-4810-a7e8-2b469cd055e3-kube-api-access-828x5\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:39 crc kubenswrapper[4814]: I1202 14:51:39.825938 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"45e1e4d7-bd31-4615-88c9-f28b59112c39","Type":"ContainerStarted","Data":"36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7"} Dec 02 14:51:39 crc kubenswrapper[4814]: I1202 14:51:39.828643 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherbb33-account-delete-pzgdl" event={"ID":"b7c4bf33-9bdc-4810-a7e8-2b469cd055e3","Type":"ContainerDied","Data":"ef2dd2b77571119d9919bbf7578e93eaec17f86c6416b2743543f251873ec32b"} Dec 02 14:51:39 crc kubenswrapper[4814]: I1202 14:51:39.828677 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef2dd2b77571119d9919bbf7578e93eaec17f86c6416b2743543f251873ec32b" Dec 02 14:51:39 crc kubenswrapper[4814]: I1202 14:51:39.828739 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherbb33-account-delete-pzgdl" Dec 02 14:51:39 crc kubenswrapper[4814]: I1202 14:51:39.878425 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:51:40 crc kubenswrapper[4814]: I1202 14:51:40.844210 4814 generic.go:334] "Generic (PLEG): container finished" podID="cb717eee-c605-4707-b7f1-0418c1d3bdab" containerID="6c9fd704787d2699d24193846a32cab6ccd81bbcc3bf07fda610bca043da1f94" exitCode=0 Dec 02 14:51:40 crc kubenswrapper[4814]: I1202 14:51:40.844307 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"cb717eee-c605-4707-b7f1-0418c1d3bdab","Type":"ContainerDied","Data":"6c9fd704787d2699d24193846a32cab6ccd81bbcc3bf07fda610bca043da1f94"} Dec 02 14:51:40 crc kubenswrapper[4814]: I1202 14:51:40.850925 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"45e1e4d7-bd31-4615-88c9-f28b59112c39","Type":"ContainerStarted","Data":"1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996"} Dec 02 14:51:40 crc kubenswrapper[4814]: I1202 14:51:40.972460 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.060889 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb717eee-c605-4707-b7f1-0418c1d3bdab-combined-ca-bundle\") pod \"cb717eee-c605-4707-b7f1-0418c1d3bdab\" (UID: \"cb717eee-c605-4707-b7f1-0418c1d3bdab\") " Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.061285 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb717eee-c605-4707-b7f1-0418c1d3bdab-logs\") pod \"cb717eee-c605-4707-b7f1-0418c1d3bdab\" (UID: \"cb717eee-c605-4707-b7f1-0418c1d3bdab\") " Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.061390 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb717eee-c605-4707-b7f1-0418c1d3bdab-config-data\") pod \"cb717eee-c605-4707-b7f1-0418c1d3bdab\" (UID: \"cb717eee-c605-4707-b7f1-0418c1d3bdab\") " Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.061466 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv8c4\" (UniqueName: \"kubernetes.io/projected/cb717eee-c605-4707-b7f1-0418c1d3bdab-kube-api-access-fv8c4\") pod \"cb717eee-c605-4707-b7f1-0418c1d3bdab\" (UID: \"cb717eee-c605-4707-b7f1-0418c1d3bdab\") " Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.061739 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb717eee-c605-4707-b7f1-0418c1d3bdab-logs" (OuterVolumeSpecName: "logs") pod "cb717eee-c605-4707-b7f1-0418c1d3bdab" (UID: "cb717eee-c605-4707-b7f1-0418c1d3bdab"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.070394 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb717eee-c605-4707-b7f1-0418c1d3bdab-kube-api-access-fv8c4" (OuterVolumeSpecName: "kube-api-access-fv8c4") pod "cb717eee-c605-4707-b7f1-0418c1d3bdab" (UID: "cb717eee-c605-4707-b7f1-0418c1d3bdab"). InnerVolumeSpecName "kube-api-access-fv8c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.108456 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb717eee-c605-4707-b7f1-0418c1d3bdab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb717eee-c605-4707-b7f1-0418c1d3bdab" (UID: "cb717eee-c605-4707-b7f1-0418c1d3bdab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.117465 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb717eee-c605-4707-b7f1-0418c1d3bdab-config-data" (OuterVolumeSpecName: "config-data") pod "cb717eee-c605-4707-b7f1-0418c1d3bdab" (UID: "cb717eee-c605-4707-b7f1-0418c1d3bdab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.157280 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-j4cvr"] Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.159688 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-j4cvr"] Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.162843 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb717eee-c605-4707-b7f1-0418c1d3bdab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.162869 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb717eee-c605-4707-b7f1-0418c1d3bdab-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.162881 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb717eee-c605-4707-b7f1-0418c1d3bdab-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.162890 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv8c4\" (UniqueName: \"kubernetes.io/projected/cb717eee-c605-4707-b7f1-0418c1d3bdab-kube-api-access-fv8c4\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.170190 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-bb33-account-create-update-9sfxr"] Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.180088 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcherbb33-account-delete-pzgdl"] Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.190295 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["watcher-kuttl-default/watcherbb33-account-delete-pzgdl"] Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.196134 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-bb33-account-create-update-9sfxr"] Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.862557 4814 generic.go:334] "Generic (PLEG): container finished" podID="663f2b54-4df3-4187-aea7-f45dc1ac6f76" containerID="6249ef8654323e54ae5d3e7c8ae98ef46e209103a09c844327bbd8c7c924659f" exitCode=0 Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.862602 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"663f2b54-4df3-4187-aea7-f45dc1ac6f76","Type":"ContainerDied","Data":"6249ef8654323e54ae5d3e7c8ae98ef46e209103a09c844327bbd8c7c924659f"} Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.864491 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"45e1e4d7-bd31-4615-88c9-f28b59112c39","Type":"ContainerStarted","Data":"50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a"} Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.865703 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"cb717eee-c605-4707-b7f1-0418c1d3bdab","Type":"ContainerDied","Data":"b7d664dcc8ab950f89e226e5b4f0bb1955a03eb9e48f6c9e7421c93063c182eb"} Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.865726 4814 scope.go:117] "RemoveContainer" containerID="6c9fd704787d2699d24193846a32cab6ccd81bbcc3bf07fda610bca043da1f94" Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.865818 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.912284 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 14:51:41 crc kubenswrapper[4814]: I1202 14:51:41.925815 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.167096 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c6b7a74-3bde-4eb4-816b-97124f6bd079" path="/var/lib/kubelet/pods/8c6b7a74-3bde-4eb4-816b-97124f6bd079/volumes" Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.167712 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7c4bf33-9bdc-4810-a7e8-2b469cd055e3" path="/var/lib/kubelet/pods/b7c4bf33-9bdc-4810-a7e8-2b469cd055e3/volumes" Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.168201 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb717eee-c605-4707-b7f1-0418c1d3bdab" path="/var/lib/kubelet/pods/cb717eee-c605-4707-b7f1-0418c1d3bdab/volumes" Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.169127 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8875ebe-e290-4e4d-995e-1c3ddb5ed526" path="/var/lib/kubelet/pods/d8875ebe-e290-4e4d-995e-1c3ddb5ed526/volumes" Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.414896 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.484230 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/663f2b54-4df3-4187-aea7-f45dc1ac6f76-custom-prometheus-ca\") pod \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\" (UID: \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\") " Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.484371 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/663f2b54-4df3-4187-aea7-f45dc1ac6f76-config-data\") pod \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\" (UID: \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\") " Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.484432 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/663f2b54-4df3-4187-aea7-f45dc1ac6f76-logs\") pod \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\" (UID: \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\") " Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.484514 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bf4z\" (UniqueName: \"kubernetes.io/projected/663f2b54-4df3-4187-aea7-f45dc1ac6f76-kube-api-access-8bf4z\") pod \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\" (UID: \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\") " Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.484562 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/663f2b54-4df3-4187-aea7-f45dc1ac6f76-combined-ca-bundle\") pod \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\" (UID: \"663f2b54-4df3-4187-aea7-f45dc1ac6f76\") " Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.485311 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/663f2b54-4df3-4187-aea7-f45dc1ac6f76-logs" (OuterVolumeSpecName: "logs") pod "663f2b54-4df3-4187-aea7-f45dc1ac6f76" (UID: "663f2b54-4df3-4187-aea7-f45dc1ac6f76"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.489897 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/663f2b54-4df3-4187-aea7-f45dc1ac6f76-kube-api-access-8bf4z" (OuterVolumeSpecName: "kube-api-access-8bf4z") pod "663f2b54-4df3-4187-aea7-f45dc1ac6f76" (UID: "663f2b54-4df3-4187-aea7-f45dc1ac6f76"). InnerVolumeSpecName "kube-api-access-8bf4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.516015 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663f2b54-4df3-4187-aea7-f45dc1ac6f76-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "663f2b54-4df3-4187-aea7-f45dc1ac6f76" (UID: "663f2b54-4df3-4187-aea7-f45dc1ac6f76"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.518397 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663f2b54-4df3-4187-aea7-f45dc1ac6f76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "663f2b54-4df3-4187-aea7-f45dc1ac6f76" (UID: "663f2b54-4df3-4187-aea7-f45dc1ac6f76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.534817 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663f2b54-4df3-4187-aea7-f45dc1ac6f76-config-data" (OuterVolumeSpecName: "config-data") pod "663f2b54-4df3-4187-aea7-f45dc1ac6f76" (UID: "663f2b54-4df3-4187-aea7-f45dc1ac6f76"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.585832 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/663f2b54-4df3-4187-aea7-f45dc1ac6f76-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.586046 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bf4z\" (UniqueName: \"kubernetes.io/projected/663f2b54-4df3-4187-aea7-f45dc1ac6f76-kube-api-access-8bf4z\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.586132 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/663f2b54-4df3-4187-aea7-f45dc1ac6f76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.586198 4814 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/663f2b54-4df3-4187-aea7-f45dc1ac6f76-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.586290 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/663f2b54-4df3-4187-aea7-f45dc1ac6f76-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.880317 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"663f2b54-4df3-4187-aea7-f45dc1ac6f76","Type":"ContainerDied","Data":"93e1a14d87573c1a56a9d58b1930ba891c0c0c2dcfaed13938f25f323b3cc17a"} Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.880386 4814 scope.go:117] "RemoveContainer" containerID="6249ef8654323e54ae5d3e7c8ae98ef46e209103a09c844327bbd8c7c924659f" Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.880539 
4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.960064 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 14:51:42 crc kubenswrapper[4814]: I1202 14:51:42.969882 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 14:51:44 crc kubenswrapper[4814]: I1202 14:51:44.155065 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="663f2b54-4df3-4187-aea7-f45dc1ac6f76" path="/var/lib/kubelet/pods/663f2b54-4df3-4187-aea7-f45dc1ac6f76/volumes" Dec 02 14:51:44 crc kubenswrapper[4814]: I1202 14:51:44.905402 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"45e1e4d7-bd31-4615-88c9-f28b59112c39","Type":"ContainerStarted","Data":"1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4"} Dec 02 14:51:44 crc kubenswrapper[4814]: I1202 14:51:44.905532 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="45e1e4d7-bd31-4615-88c9-f28b59112c39" containerName="ceilometer-central-agent" containerID="cri-o://36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7" gracePeriod=30 Dec 02 14:51:44 crc kubenswrapper[4814]: I1202 14:51:44.905571 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:44 crc kubenswrapper[4814]: I1202 14:51:44.905619 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="45e1e4d7-bd31-4615-88c9-f28b59112c39" containerName="proxy-httpd" containerID="cri-o://1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4" gracePeriod=30 Dec 02 14:51:44 crc kubenswrapper[4814]: 
I1202 14:51:44.905653 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="45e1e4d7-bd31-4615-88c9-f28b59112c39" containerName="sg-core" containerID="cri-o://50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a" gracePeriod=30 Dec 02 14:51:44 crc kubenswrapper[4814]: I1202 14:51:44.905684 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="45e1e4d7-bd31-4615-88c9-f28b59112c39" containerName="ceilometer-notification-agent" containerID="cri-o://1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996" gracePeriod=30 Dec 02 14:51:44 crc kubenswrapper[4814]: I1202 14:51:44.930077 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.9750553330000002 podStartE2EDuration="7.930054969s" podCreationTimestamp="2025-12-02 14:51:37 +0000 UTC" firstStartedPulling="2025-12-02 14:51:38.744494573 +0000 UTC m=+1360.868856797" lastFinishedPulling="2025-12-02 14:51:43.699494219 +0000 UTC m=+1365.823856433" observedRunningTime="2025-12-02 14:51:44.924203753 +0000 UTC m=+1367.048565957" watchObservedRunningTime="2025-12-02 14:51:44.930054969 +0000 UTC m=+1367.054417183" Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.875087 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.929065 4814 generic.go:334] "Generic (PLEG): container finished" podID="45e1e4d7-bd31-4615-88c9-f28b59112c39" containerID="1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4" exitCode=0 Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.929581 4814 generic.go:334] "Generic (PLEG): container finished" podID="45e1e4d7-bd31-4615-88c9-f28b59112c39" containerID="50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a" exitCode=2 Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.929663 4814 generic.go:334] "Generic (PLEG): container finished" podID="45e1e4d7-bd31-4615-88c9-f28b59112c39" containerID="1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996" exitCode=0 Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.929741 4814 generic.go:334] "Generic (PLEG): container finished" podID="45e1e4d7-bd31-4615-88c9-f28b59112c39" containerID="36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7" exitCode=0 Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.929851 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"45e1e4d7-bd31-4615-88c9-f28b59112c39","Type":"ContainerDied","Data":"1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4"} Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.929975 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"45e1e4d7-bd31-4615-88c9-f28b59112c39","Type":"ContainerDied","Data":"50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a"} Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.930081 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"45e1e4d7-bd31-4615-88c9-f28b59112c39","Type":"ContainerDied","Data":"1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996"} Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.930175 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"45e1e4d7-bd31-4615-88c9-f28b59112c39","Type":"ContainerDied","Data":"36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7"} Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.930292 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"45e1e4d7-bd31-4615-88c9-f28b59112c39","Type":"ContainerDied","Data":"05085770e3f0b6e910d5543cbf8a145ad4a5425563cf17274e680ce1a22f731f"} Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.930410 4814 scope.go:117] "RemoveContainer" containerID="1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4" Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.930741 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.962090 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-scripts\") pod \"45e1e4d7-bd31-4615-88c9-f28b59112c39\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.962163 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-combined-ca-bundle\") pod \"45e1e4d7-bd31-4615-88c9-f28b59112c39\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.962314 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-ceilometer-tls-certs\") pod \"45e1e4d7-bd31-4615-88c9-f28b59112c39\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.962349 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28x9p\" (UniqueName: \"kubernetes.io/projected/45e1e4d7-bd31-4615-88c9-f28b59112c39-kube-api-access-28x9p\") pod \"45e1e4d7-bd31-4615-88c9-f28b59112c39\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.962388 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-config-data\") pod \"45e1e4d7-bd31-4615-88c9-f28b59112c39\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.962448 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/45e1e4d7-bd31-4615-88c9-f28b59112c39-log-httpd\") pod \"45e1e4d7-bd31-4615-88c9-f28b59112c39\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.962488 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45e1e4d7-bd31-4615-88c9-f28b59112c39-run-httpd\") pod \"45e1e4d7-bd31-4615-88c9-f28b59112c39\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.962571 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-sg-core-conf-yaml\") pod \"45e1e4d7-bd31-4615-88c9-f28b59112c39\" (UID: \"45e1e4d7-bd31-4615-88c9-f28b59112c39\") " Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.968640 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45e1e4d7-bd31-4615-88c9-f28b59112c39-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "45e1e4d7-bd31-4615-88c9-f28b59112c39" (UID: "45e1e4d7-bd31-4615-88c9-f28b59112c39"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.968797 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45e1e4d7-bd31-4615-88c9-f28b59112c39-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "45e1e4d7-bd31-4615-88c9-f28b59112c39" (UID: "45e1e4d7-bd31-4615-88c9-f28b59112c39"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.968895 4814 scope.go:117] "RemoveContainer" containerID="50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a" Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.972120 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45e1e4d7-bd31-4615-88c9-f28b59112c39-kube-api-access-28x9p" (OuterVolumeSpecName: "kube-api-access-28x9p") pod "45e1e4d7-bd31-4615-88c9-f28b59112c39" (UID: "45e1e4d7-bd31-4615-88c9-f28b59112c39"). InnerVolumeSpecName "kube-api-access-28x9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.972589 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-scripts" (OuterVolumeSpecName: "scripts") pod "45e1e4d7-bd31-4615-88c9-f28b59112c39" (UID: "45e1e4d7-bd31-4615-88c9-f28b59112c39"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.986921 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "45e1e4d7-bd31-4615-88c9-f28b59112c39" (UID: "45e1e4d7-bd31-4615-88c9-f28b59112c39"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:45 crc kubenswrapper[4814]: I1202 14:51:45.986985 4814 scope.go:117] "RemoveContainer" containerID="1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.005013 4814 scope.go:117] "RemoveContainer" containerID="36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.012058 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "45e1e4d7-bd31-4615-88c9-f28b59112c39" (UID: "45e1e4d7-bd31-4615-88c9-f28b59112c39"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.029258 4814 scope.go:117] "RemoveContainer" containerID="1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4" Dec 02 14:51:46 crc kubenswrapper[4814]: E1202 14:51:46.029640 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4\": container with ID starting with 1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4 not found: ID does not exist" containerID="1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.029692 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4"} err="failed to get container status \"1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4\": rpc error: code = NotFound desc = could not find container \"1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4\": container with ID 
starting with 1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4 not found: ID does not exist" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.029724 4814 scope.go:117] "RemoveContainer" containerID="50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a" Dec 02 14:51:46 crc kubenswrapper[4814]: E1202 14:51:46.030026 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a\": container with ID starting with 50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a not found: ID does not exist" containerID="50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.030130 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a"} err="failed to get container status \"50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a\": rpc error: code = NotFound desc = could not find container \"50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a\": container with ID starting with 50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a not found: ID does not exist" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.030217 4814 scope.go:117] "RemoveContainer" containerID="1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996" Dec 02 14:51:46 crc kubenswrapper[4814]: E1202 14:51:46.030598 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996\": container with ID starting with 1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996 not found: ID does not exist" containerID="1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996" Dec 02 
14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.030629 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996"} err="failed to get container status \"1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996\": rpc error: code = NotFound desc = could not find container \"1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996\": container with ID starting with 1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996 not found: ID does not exist" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.030654 4814 scope.go:117] "RemoveContainer" containerID="36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7" Dec 02 14:51:46 crc kubenswrapper[4814]: E1202 14:51:46.030903 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7\": container with ID starting with 36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7 not found: ID does not exist" containerID="36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.031007 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7"} err="failed to get container status \"36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7\": rpc error: code = NotFound desc = could not find container \"36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7\": container with ID starting with 36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7 not found: ID does not exist" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.031092 4814 scope.go:117] "RemoveContainer" 
containerID="1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.031627 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4"} err="failed to get container status \"1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4\": rpc error: code = NotFound desc = could not find container \"1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4\": container with ID starting with 1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4 not found: ID does not exist" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.031727 4814 scope.go:117] "RemoveContainer" containerID="50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.032041 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a"} err="failed to get container status \"50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a\": rpc error: code = NotFound desc = could not find container \"50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a\": container with ID starting with 50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a not found: ID does not exist" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.032065 4814 scope.go:117] "RemoveContainer" containerID="1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.032440 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996"} err="failed to get container status \"1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996\": rpc error: code = NotFound desc = could 
not find container \"1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996\": container with ID starting with 1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996 not found: ID does not exist" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.032462 4814 scope.go:117] "RemoveContainer" containerID="36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.032669 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7"} err="failed to get container status \"36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7\": rpc error: code = NotFound desc = could not find container \"36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7\": container with ID starting with 36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7 not found: ID does not exist" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.032691 4814 scope.go:117] "RemoveContainer" containerID="1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.032934 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4"} err="failed to get container status \"1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4\": rpc error: code = NotFound desc = could not find container \"1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4\": container with ID starting with 1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4 not found: ID does not exist" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.032959 4814 scope.go:117] "RemoveContainer" containerID="50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 
14:51:46.033223 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a"} err="failed to get container status \"50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a\": rpc error: code = NotFound desc = could not find container \"50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a\": container with ID starting with 50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a not found: ID does not exist" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.033257 4814 scope.go:117] "RemoveContainer" containerID="1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.033513 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996"} err="failed to get container status \"1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996\": rpc error: code = NotFound desc = could not find container \"1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996\": container with ID starting with 1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996 not found: ID does not exist" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.033533 4814 scope.go:117] "RemoveContainer" containerID="36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.033762 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7"} err="failed to get container status \"36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7\": rpc error: code = NotFound desc = could not find container \"36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7\": container with ID starting with 
36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7 not found: ID does not exist" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.033782 4814 scope.go:117] "RemoveContainer" containerID="1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.034033 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4"} err="failed to get container status \"1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4\": rpc error: code = NotFound desc = could not find container \"1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4\": container with ID starting with 1a5755f27325e6a9e7a46cb7dabf00fe60234966e30b1e129cfc61fbef3397b4 not found: ID does not exist" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.034054 4814 scope.go:117] "RemoveContainer" containerID="50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.034298 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a"} err="failed to get container status \"50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a\": rpc error: code = NotFound desc = could not find container \"50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a\": container with ID starting with 50cc9aaeacdb0b05f8c4f3601cac1723130db45bf4aa216ed16c171c09c0e20a not found: ID does not exist" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.034318 4814 scope.go:117] "RemoveContainer" containerID="1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.034552 4814 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996"} err="failed to get container status \"1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996\": rpc error: code = NotFound desc = could not find container \"1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996\": container with ID starting with 1180889589c1677cb92a90b311a48945237adde8741fc3ec6f0b9069ee470996 not found: ID does not exist" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.034567 4814 scope.go:117] "RemoveContainer" containerID="36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.034791 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7"} err="failed to get container status \"36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7\": rpc error: code = NotFound desc = could not find container \"36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7\": container with ID starting with 36d35d45695f5668b467128cd57a1969e0e0489b7ea92c27faa87e4346cd0cd7 not found: ID does not exist" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.039675 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45e1e4d7-bd31-4615-88c9-f28b59112c39" (UID: "45e1e4d7-bd31-4615-88c9-f28b59112c39"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.058755 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-config-data" (OuterVolumeSpecName: "config-data") pod "45e1e4d7-bd31-4615-88c9-f28b59112c39" (UID: "45e1e4d7-bd31-4615-88c9-f28b59112c39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.064895 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.064925 4814 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45e1e4d7-bd31-4615-88c9-f28b59112c39-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.064934 4814 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45e1e4d7-bd31-4615-88c9-f28b59112c39-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.064947 4814 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.064956 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.064964 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.064973 4814 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/45e1e4d7-bd31-4615-88c9-f28b59112c39-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.064981 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28x9p\" (UniqueName: \"kubernetes.io/projected/45e1e4d7-bd31-4615-88c9-f28b59112c39-kube-api-access-28x9p\") on node \"crc\" DevicePath \"\"" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.255592 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.262535 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.293044 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:51:46 crc kubenswrapper[4814]: E1202 14:51:46.293568 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e1e4d7-bd31-4615-88c9-f28b59112c39" containerName="proxy-httpd" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.293598 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e1e4d7-bd31-4615-88c9-f28b59112c39" containerName="proxy-httpd" Dec 02 14:51:46 crc kubenswrapper[4814]: E1202 14:51:46.293621 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb717eee-c605-4707-b7f1-0418c1d3bdab" containerName="watcher-applier" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.293634 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb717eee-c605-4707-b7f1-0418c1d3bdab" containerName="watcher-applier" Dec 02 14:51:46 crc kubenswrapper[4814]: E1202 14:51:46.293650 
4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e1e4d7-bd31-4615-88c9-f28b59112c39" containerName="sg-core" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.293662 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e1e4d7-bd31-4615-88c9-f28b59112c39" containerName="sg-core" Dec 02 14:51:46 crc kubenswrapper[4814]: E1202 14:51:46.293718 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e1e4d7-bd31-4615-88c9-f28b59112c39" containerName="ceilometer-notification-agent" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.293731 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e1e4d7-bd31-4615-88c9-f28b59112c39" containerName="ceilometer-notification-agent" Dec 02 14:51:46 crc kubenswrapper[4814]: E1202 14:51:46.293755 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="663f2b54-4df3-4187-aea7-f45dc1ac6f76" containerName="watcher-decision-engine" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.293767 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="663f2b54-4df3-4187-aea7-f45dc1ac6f76" containerName="watcher-decision-engine" Dec 02 14:51:46 crc kubenswrapper[4814]: E1202 14:51:46.293793 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e1e4d7-bd31-4615-88c9-f28b59112c39" containerName="ceilometer-central-agent" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.293806 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e1e4d7-bd31-4615-88c9-f28b59112c39" containerName="ceilometer-central-agent" Dec 02 14:51:46 crc kubenswrapper[4814]: E1202 14:51:46.293828 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c4bf33-9bdc-4810-a7e8-2b469cd055e3" containerName="mariadb-account-delete" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.293840 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c4bf33-9bdc-4810-a7e8-2b469cd055e3" containerName="mariadb-account-delete" Dec 02 14:51:46 
crc kubenswrapper[4814]: I1202 14:51:46.294139 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e1e4d7-bd31-4615-88c9-f28b59112c39" containerName="sg-core" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.294160 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e1e4d7-bd31-4615-88c9-f28b59112c39" containerName="proxy-httpd" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.294185 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e1e4d7-bd31-4615-88c9-f28b59112c39" containerName="ceilometer-notification-agent" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.294206 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c4bf33-9bdc-4810-a7e8-2b469cd055e3" containerName="mariadb-account-delete" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.294277 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb717eee-c605-4707-b7f1-0418c1d3bdab" containerName="watcher-applier" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.294295 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e1e4d7-bd31-4615-88c9-f28b59112c39" containerName="ceilometer-central-agent" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.294323 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="663f2b54-4df3-4187-aea7-f45dc1ac6f76" containerName="watcher-decision-engine" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.296084 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.298651 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.298979 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.299039 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.316106 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.370636 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be9358f4-97c2-48a1-9499-0a7eac196dcf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.370690 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be9358f4-97c2-48a1-9499-0a7eac196dcf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.370722 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be9358f4-97c2-48a1-9499-0a7eac196dcf-log-httpd\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.370981 4814 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/be9358f4-97c2-48a1-9499-0a7eac196dcf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.371046 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be9358f4-97c2-48a1-9499-0a7eac196dcf-run-httpd\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.371087 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be9358f4-97c2-48a1-9499-0a7eac196dcf-config-data\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.371375 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd2gv\" (UniqueName: \"kubernetes.io/projected/be9358f4-97c2-48a1-9499-0a7eac196dcf-kube-api-access-vd2gv\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.371465 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be9358f4-97c2-48a1-9499-0a7eac196dcf-scripts\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.472904 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be9358f4-97c2-48a1-9499-0a7eac196dcf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.472993 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be9358f4-97c2-48a1-9499-0a7eac196dcf-log-httpd\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.473085 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/be9358f4-97c2-48a1-9499-0a7eac196dcf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.473119 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be9358f4-97c2-48a1-9499-0a7eac196dcf-run-httpd\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.473157 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be9358f4-97c2-48a1-9499-0a7eac196dcf-config-data\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.473296 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd2gv\" (UniqueName: \"kubernetes.io/projected/be9358f4-97c2-48a1-9499-0a7eac196dcf-kube-api-access-vd2gv\") pod \"ceilometer-0\" (UID: 
\"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.473346 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be9358f4-97c2-48a1-9499-0a7eac196dcf-scripts\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.473379 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be9358f4-97c2-48a1-9499-0a7eac196dcf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.474028 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be9358f4-97c2-48a1-9499-0a7eac196dcf-run-httpd\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.474517 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be9358f4-97c2-48a1-9499-0a7eac196dcf-log-httpd\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.477848 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be9358f4-97c2-48a1-9499-0a7eac196dcf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.479058 4814 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be9358f4-97c2-48a1-9499-0a7eac196dcf-config-data\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.479917 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/be9358f4-97c2-48a1-9499-0a7eac196dcf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.488528 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be9358f4-97c2-48a1-9499-0a7eac196dcf-scripts\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.523854 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be9358f4-97c2-48a1-9499-0a7eac196dcf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.526448 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd2gv\" (UniqueName: \"kubernetes.io/projected/be9358f4-97c2-48a1-9499-0a7eac196dcf-kube-api-access-vd2gv\") pod \"ceilometer-0\" (UID: \"be9358f4-97c2-48a1-9499-0a7eac196dcf\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:46 crc kubenswrapper[4814]: I1202 14:51:46.611530 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:47 crc kubenswrapper[4814]: I1202 14:51:47.162933 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 14:51:47 crc kubenswrapper[4814]: I1202 14:51:47.949741 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"be9358f4-97c2-48a1-9499-0a7eac196dcf","Type":"ContainerStarted","Data":"9c58867f38bede21d7d177ae0d349c988b90232f376added43816faa6c01d906"} Dec 02 14:51:48 crc kubenswrapper[4814]: I1202 14:51:48.156786 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45e1e4d7-bd31-4615-88c9-f28b59112c39" path="/var/lib/kubelet/pods/45e1e4d7-bd31-4615-88c9-f28b59112c39/volumes" Dec 02 14:51:48 crc kubenswrapper[4814]: I1202 14:51:48.973624 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"be9358f4-97c2-48a1-9499-0a7eac196dcf","Type":"ContainerStarted","Data":"88f63665be24c96dfbbb9bc8fbbc6f7931de26ccab4c2b712cde7ab6427c22fa"} Dec 02 14:51:49 crc kubenswrapper[4814]: I1202 14:51:49.986259 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"be9358f4-97c2-48a1-9499-0a7eac196dcf","Type":"ContainerStarted","Data":"2d8e679f70db75dac6eca3daab0844bff24b53ead4b9c7e8c1c3867e669b699b"} Dec 02 14:51:49 crc kubenswrapper[4814]: I1202 14:51:49.986632 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"be9358f4-97c2-48a1-9499-0a7eac196dcf","Type":"ContainerStarted","Data":"29c20c96b33d234ca7b886169098ea96fa0487d21c84085866e20f149db50f5e"} Dec 02 14:51:50 crc kubenswrapper[4814]: I1202 14:51:50.651679 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:51:50 crc kubenswrapper[4814]: I1202 14:51:50.651987 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:51:52 crc kubenswrapper[4814]: I1202 14:51:52.006350 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"be9358f4-97c2-48a1-9499-0a7eac196dcf","Type":"ContainerStarted","Data":"92c468f10ceae3fb031f61d3e78283bf2a2d440696fad72ee2716d0d400304a4"} Dec 02 14:51:52 crc kubenswrapper[4814]: I1202 14:51:52.006798 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:51:52 crc kubenswrapper[4814]: I1202 14:51:52.036745 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.25319917 podStartE2EDuration="6.036721764s" podCreationTimestamp="2025-12-02 14:51:46 +0000 UTC" firstStartedPulling="2025-12-02 14:51:47.154866128 +0000 UTC m=+1369.279228342" lastFinishedPulling="2025-12-02 14:51:50.938388732 +0000 UTC m=+1373.062750936" observedRunningTime="2025-12-02 14:51:52.033900673 +0000 UTC m=+1374.158262917" watchObservedRunningTime="2025-12-02 14:51:52.036721764 +0000 UTC m=+1374.161083998" Dec 02 14:51:58 crc kubenswrapper[4814]: I1202 14:51:58.989132 4814 scope.go:117] "RemoveContainer" containerID="8246572ebab04f58555cf55adcced0784a792b7cf44f49942710ae6fb80a0528" Dec 02 14:52:08 crc kubenswrapper[4814]: I1202 14:52:08.882919 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-29jzn"] Dec 02 14:52:08 crc kubenswrapper[4814]: I1202 
14:52:08.885683 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-29jzn" Dec 02 14:52:08 crc kubenswrapper[4814]: I1202 14:52:08.893553 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-29jzn"] Dec 02 14:52:08 crc kubenswrapper[4814]: I1202 14:52:08.991877 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhtpj\" (UniqueName: \"kubernetes.io/projected/817c7b12-bf78-461b-8772-52e379d3951a-kube-api-access-xhtpj\") pod \"redhat-operators-29jzn\" (UID: \"817c7b12-bf78-461b-8772-52e379d3951a\") " pod="openshift-marketplace/redhat-operators-29jzn" Dec 02 14:52:08 crc kubenswrapper[4814]: I1202 14:52:08.991981 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/817c7b12-bf78-461b-8772-52e379d3951a-utilities\") pod \"redhat-operators-29jzn\" (UID: \"817c7b12-bf78-461b-8772-52e379d3951a\") " pod="openshift-marketplace/redhat-operators-29jzn" Dec 02 14:52:08 crc kubenswrapper[4814]: I1202 14:52:08.992064 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/817c7b12-bf78-461b-8772-52e379d3951a-catalog-content\") pod \"redhat-operators-29jzn\" (UID: \"817c7b12-bf78-461b-8772-52e379d3951a\") " pod="openshift-marketplace/redhat-operators-29jzn" Dec 02 14:52:09 crc kubenswrapper[4814]: I1202 14:52:09.094158 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/817c7b12-bf78-461b-8772-52e379d3951a-utilities\") pod \"redhat-operators-29jzn\" (UID: \"817c7b12-bf78-461b-8772-52e379d3951a\") " pod="openshift-marketplace/redhat-operators-29jzn" Dec 02 14:52:09 crc kubenswrapper[4814]: I1202 14:52:09.094249 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/817c7b12-bf78-461b-8772-52e379d3951a-catalog-content\") pod \"redhat-operators-29jzn\" (UID: \"817c7b12-bf78-461b-8772-52e379d3951a\") " pod="openshift-marketplace/redhat-operators-29jzn" Dec 02 14:52:09 crc kubenswrapper[4814]: I1202 14:52:09.094306 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhtpj\" (UniqueName: \"kubernetes.io/projected/817c7b12-bf78-461b-8772-52e379d3951a-kube-api-access-xhtpj\") pod \"redhat-operators-29jzn\" (UID: \"817c7b12-bf78-461b-8772-52e379d3951a\") " pod="openshift-marketplace/redhat-operators-29jzn" Dec 02 14:52:09 crc kubenswrapper[4814]: I1202 14:52:09.094991 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/817c7b12-bf78-461b-8772-52e379d3951a-catalog-content\") pod \"redhat-operators-29jzn\" (UID: \"817c7b12-bf78-461b-8772-52e379d3951a\") " pod="openshift-marketplace/redhat-operators-29jzn" Dec 02 14:52:09 crc kubenswrapper[4814]: I1202 14:52:09.096158 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/817c7b12-bf78-461b-8772-52e379d3951a-utilities\") pod \"redhat-operators-29jzn\" (UID: \"817c7b12-bf78-461b-8772-52e379d3951a\") " pod="openshift-marketplace/redhat-operators-29jzn" Dec 02 14:52:09 crc kubenswrapper[4814]: I1202 14:52:09.114359 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhtpj\" (UniqueName: \"kubernetes.io/projected/817c7b12-bf78-461b-8772-52e379d3951a-kube-api-access-xhtpj\") pod \"redhat-operators-29jzn\" (UID: \"817c7b12-bf78-461b-8772-52e379d3951a\") " pod="openshift-marketplace/redhat-operators-29jzn" Dec 02 14:52:09 crc kubenswrapper[4814]: I1202 14:52:09.226661 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-29jzn" Dec 02 14:52:09 crc kubenswrapper[4814]: I1202 14:52:09.675716 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-29jzn"] Dec 02 14:52:10 crc kubenswrapper[4814]: I1202 14:52:10.161966 4814 generic.go:334] "Generic (PLEG): container finished" podID="817c7b12-bf78-461b-8772-52e379d3951a" containerID="0647c71e57228018840302a7dc252607d5c59f621dd2ba97a4a30d18696351a6" exitCode=0 Dec 02 14:52:10 crc kubenswrapper[4814]: I1202 14:52:10.162017 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29jzn" event={"ID":"817c7b12-bf78-461b-8772-52e379d3951a","Type":"ContainerDied","Data":"0647c71e57228018840302a7dc252607d5c59f621dd2ba97a4a30d18696351a6"} Dec 02 14:52:10 crc kubenswrapper[4814]: I1202 14:52:10.162040 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29jzn" event={"ID":"817c7b12-bf78-461b-8772-52e379d3951a","Type":"ContainerStarted","Data":"c90adabb520324cad38da63b7231e38e6bb9e8cb645e5b83570304c147638179"} Dec 02 14:52:12 crc kubenswrapper[4814]: I1202 14:52:12.186686 4814 generic.go:334] "Generic (PLEG): container finished" podID="817c7b12-bf78-461b-8772-52e379d3951a" containerID="f2e48f9f656464d501a1e7c15529607f336e634eeb9cb8050b1ec60528c2181e" exitCode=0 Dec 02 14:52:12 crc kubenswrapper[4814]: I1202 14:52:12.186741 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29jzn" event={"ID":"817c7b12-bf78-461b-8772-52e379d3951a","Type":"ContainerDied","Data":"f2e48f9f656464d501a1e7c15529607f336e634eeb9cb8050b1ec60528c2181e"} Dec 02 14:52:13 crc kubenswrapper[4814]: I1202 14:52:13.198510 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29jzn" 
event={"ID":"817c7b12-bf78-461b-8772-52e379d3951a","Type":"ContainerStarted","Data":"8c263ed4dca5568e2eb559a0ac79e5c2677bcbfe0ebc7951457d9763daa94edf"} Dec 02 14:52:13 crc kubenswrapper[4814]: I1202 14:52:13.227357 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-29jzn" podStartSLOduration=2.773669541 podStartE2EDuration="5.227335863s" podCreationTimestamp="2025-12-02 14:52:08 +0000 UTC" firstStartedPulling="2025-12-02 14:52:10.16354104 +0000 UTC m=+1392.287903234" lastFinishedPulling="2025-12-02 14:52:12.617207312 +0000 UTC m=+1394.741569556" observedRunningTime="2025-12-02 14:52:13.218592095 +0000 UTC m=+1395.342954299" watchObservedRunningTime="2025-12-02 14:52:13.227335863 +0000 UTC m=+1395.351698107" Dec 02 14:52:16 crc kubenswrapper[4814]: I1202 14:52:16.702017 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Dec 02 14:52:19 crc kubenswrapper[4814]: I1202 14:52:19.227473 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-29jzn" Dec 02 14:52:19 crc kubenswrapper[4814]: I1202 14:52:19.227856 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-29jzn" Dec 02 14:52:19 crc kubenswrapper[4814]: I1202 14:52:19.308185 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-29jzn" Dec 02 14:52:19 crc kubenswrapper[4814]: I1202 14:52:19.373311 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-29jzn" Dec 02 14:52:20 crc kubenswrapper[4814]: I1202 14:52:20.651569 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:52:20 crc kubenswrapper[4814]: I1202 14:52:20.651939 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:52:20 crc kubenswrapper[4814]: I1202 14:52:20.652001 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 14:52:20 crc kubenswrapper[4814]: I1202 14:52:20.652939 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5dc327b43bfefefc885573889c7f1e87316bb1d715cad4948d07c7ce0116b628"} pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:52:20 crc kubenswrapper[4814]: I1202 14:52:20.653027 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" containerID="cri-o://5dc327b43bfefefc885573889c7f1e87316bb1d715cad4948d07c7ce0116b628" gracePeriod=600 Dec 02 14:52:22 crc kubenswrapper[4814]: I1202 14:52:22.287673 4814 generic.go:334] "Generic (PLEG): container finished" podID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerID="5dc327b43bfefefc885573889c7f1e87316bb1d715cad4948d07c7ce0116b628" exitCode=0 Dec 02 14:52:22 crc kubenswrapper[4814]: I1202 14:52:22.287745 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" 
event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerDied","Data":"5dc327b43bfefefc885573889c7f1e87316bb1d715cad4948d07c7ce0116b628"} Dec 02 14:52:22 crc kubenswrapper[4814]: I1202 14:52:22.287980 4814 scope.go:117] "RemoveContainer" containerID="1a790dc6cfa4af470ca8102d397fb8348715084a7eb2e984ef5303da30379ee9" Dec 02 14:52:22 crc kubenswrapper[4814]: I1202 14:52:22.860661 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-29jzn"] Dec 02 14:52:22 crc kubenswrapper[4814]: I1202 14:52:22.860995 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-29jzn" podUID="817c7b12-bf78-461b-8772-52e379d3951a" containerName="registry-server" containerID="cri-o://8c263ed4dca5568e2eb559a0ac79e5c2677bcbfe0ebc7951457d9763daa94edf" gracePeriod=2 Dec 02 14:52:24 crc kubenswrapper[4814]: I1202 14:52:24.323386 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerStarted","Data":"d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa"} Dec 02 14:52:25 crc kubenswrapper[4814]: I1202 14:52:25.338596 4814 generic.go:334] "Generic (PLEG): container finished" podID="817c7b12-bf78-461b-8772-52e379d3951a" containerID="8c263ed4dca5568e2eb559a0ac79e5c2677bcbfe0ebc7951457d9763daa94edf" exitCode=0 Dec 02 14:52:25 crc kubenswrapper[4814]: I1202 14:52:25.338715 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29jzn" event={"ID":"817c7b12-bf78-461b-8772-52e379d3951a","Type":"ContainerDied","Data":"8c263ed4dca5568e2eb559a0ac79e5c2677bcbfe0ebc7951457d9763daa94edf"} Dec 02 14:52:25 crc kubenswrapper[4814]: I1202 14:52:25.593479 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-29jzn" Dec 02 14:52:25 crc kubenswrapper[4814]: I1202 14:52:25.692054 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/817c7b12-bf78-461b-8772-52e379d3951a-utilities\") pod \"817c7b12-bf78-461b-8772-52e379d3951a\" (UID: \"817c7b12-bf78-461b-8772-52e379d3951a\") " Dec 02 14:52:25 crc kubenswrapper[4814]: I1202 14:52:25.692309 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhtpj\" (UniqueName: \"kubernetes.io/projected/817c7b12-bf78-461b-8772-52e379d3951a-kube-api-access-xhtpj\") pod \"817c7b12-bf78-461b-8772-52e379d3951a\" (UID: \"817c7b12-bf78-461b-8772-52e379d3951a\") " Dec 02 14:52:25 crc kubenswrapper[4814]: I1202 14:52:25.692391 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/817c7b12-bf78-461b-8772-52e379d3951a-catalog-content\") pod \"817c7b12-bf78-461b-8772-52e379d3951a\" (UID: \"817c7b12-bf78-461b-8772-52e379d3951a\") " Dec 02 14:52:25 crc kubenswrapper[4814]: I1202 14:52:25.693102 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/817c7b12-bf78-461b-8772-52e379d3951a-utilities" (OuterVolumeSpecName: "utilities") pod "817c7b12-bf78-461b-8772-52e379d3951a" (UID: "817c7b12-bf78-461b-8772-52e379d3951a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:52:25 crc kubenswrapper[4814]: I1202 14:52:25.697836 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/817c7b12-bf78-461b-8772-52e379d3951a-kube-api-access-xhtpj" (OuterVolumeSpecName: "kube-api-access-xhtpj") pod "817c7b12-bf78-461b-8772-52e379d3951a" (UID: "817c7b12-bf78-461b-8772-52e379d3951a"). InnerVolumeSpecName "kube-api-access-xhtpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:52:25 crc kubenswrapper[4814]: I1202 14:52:25.794061 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhtpj\" (UniqueName: \"kubernetes.io/projected/817c7b12-bf78-461b-8772-52e379d3951a-kube-api-access-xhtpj\") on node \"crc\" DevicePath \"\"" Dec 02 14:52:25 crc kubenswrapper[4814]: I1202 14:52:25.794100 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/817c7b12-bf78-461b-8772-52e379d3951a-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:52:25 crc kubenswrapper[4814]: I1202 14:52:25.797590 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/817c7b12-bf78-461b-8772-52e379d3951a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "817c7b12-bf78-461b-8772-52e379d3951a" (UID: "817c7b12-bf78-461b-8772-52e379d3951a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:52:25 crc kubenswrapper[4814]: I1202 14:52:25.895940 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/817c7b12-bf78-461b-8772-52e379d3951a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:52:26 crc kubenswrapper[4814]: I1202 14:52:26.349577 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29jzn" event={"ID":"817c7b12-bf78-461b-8772-52e379d3951a","Type":"ContainerDied","Data":"c90adabb520324cad38da63b7231e38e6bb9e8cb645e5b83570304c147638179"} Dec 02 14:52:26 crc kubenswrapper[4814]: I1202 14:52:26.349613 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-29jzn" Dec 02 14:52:26 crc kubenswrapper[4814]: I1202 14:52:26.349961 4814 scope.go:117] "RemoveContainer" containerID="8c263ed4dca5568e2eb559a0ac79e5c2677bcbfe0ebc7951457d9763daa94edf" Dec 02 14:52:26 crc kubenswrapper[4814]: I1202 14:52:26.377867 4814 scope.go:117] "RemoveContainer" containerID="f2e48f9f656464d501a1e7c15529607f336e634eeb9cb8050b1ec60528c2181e" Dec 02 14:52:26 crc kubenswrapper[4814]: I1202 14:52:26.385365 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-29jzn"] Dec 02 14:52:26 crc kubenswrapper[4814]: I1202 14:52:26.392305 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-29jzn"] Dec 02 14:52:26 crc kubenswrapper[4814]: I1202 14:52:26.414206 4814 scope.go:117] "RemoveContainer" containerID="0647c71e57228018840302a7dc252607d5c59f621dd2ba97a4a30d18696351a6" Dec 02 14:52:28 crc kubenswrapper[4814]: I1202 14:52:28.162424 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="817c7b12-bf78-461b-8772-52e379d3951a" path="/var/lib/kubelet/pods/817c7b12-bf78-461b-8772-52e379d3951a/volumes" Dec 02 14:53:59 crc kubenswrapper[4814]: I1202 14:53:59.218224 4814 scope.go:117] "RemoveContainer" containerID="0f33ea1fc7a1d6ec1b9f4ee3215d2415f4d23a1fb74688276f005f160bd933d9" Dec 02 14:54:34 crc kubenswrapper[4814]: I1202 14:54:34.281185 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bbvfb"] Dec 02 14:54:34 crc kubenswrapper[4814]: E1202 14:54:34.282364 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="817c7b12-bf78-461b-8772-52e379d3951a" containerName="extract-utilities" Dec 02 14:54:34 crc kubenswrapper[4814]: I1202 14:54:34.282385 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="817c7b12-bf78-461b-8772-52e379d3951a" containerName="extract-utilities" Dec 02 14:54:34 crc kubenswrapper[4814]: E1202 
14:54:34.282413 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="817c7b12-bf78-461b-8772-52e379d3951a" containerName="extract-content" Dec 02 14:54:34 crc kubenswrapper[4814]: I1202 14:54:34.282425 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="817c7b12-bf78-461b-8772-52e379d3951a" containerName="extract-content" Dec 02 14:54:34 crc kubenswrapper[4814]: E1202 14:54:34.282451 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="817c7b12-bf78-461b-8772-52e379d3951a" containerName="registry-server" Dec 02 14:54:34 crc kubenswrapper[4814]: I1202 14:54:34.282461 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="817c7b12-bf78-461b-8772-52e379d3951a" containerName="registry-server" Dec 02 14:54:34 crc kubenswrapper[4814]: I1202 14:54:34.282720 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="817c7b12-bf78-461b-8772-52e379d3951a" containerName="registry-server" Dec 02 14:54:34 crc kubenswrapper[4814]: I1202 14:54:34.284993 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bbvfb" Dec 02 14:54:34 crc kubenswrapper[4814]: I1202 14:54:34.298128 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bbvfb"] Dec 02 14:54:34 crc kubenswrapper[4814]: I1202 14:54:34.367192 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cefbb117-3f2d-4a2c-8bea-548cbe8cffbf-catalog-content\") pod \"community-operators-bbvfb\" (UID: \"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf\") " pod="openshift-marketplace/community-operators-bbvfb" Dec 02 14:54:34 crc kubenswrapper[4814]: I1202 14:54:34.367253 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cefbb117-3f2d-4a2c-8bea-548cbe8cffbf-utilities\") pod \"community-operators-bbvfb\" (UID: \"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf\") " pod="openshift-marketplace/community-operators-bbvfb" Dec 02 14:54:34 crc kubenswrapper[4814]: I1202 14:54:34.367389 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzgvs\" (UniqueName: \"kubernetes.io/projected/cefbb117-3f2d-4a2c-8bea-548cbe8cffbf-kube-api-access-tzgvs\") pod \"community-operators-bbvfb\" (UID: \"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf\") " pod="openshift-marketplace/community-operators-bbvfb" Dec 02 14:54:34 crc kubenswrapper[4814]: I1202 14:54:34.468555 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cefbb117-3f2d-4a2c-8bea-548cbe8cffbf-catalog-content\") pod \"community-operators-bbvfb\" (UID: \"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf\") " pod="openshift-marketplace/community-operators-bbvfb" Dec 02 14:54:34 crc kubenswrapper[4814]: I1202 14:54:34.468606 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cefbb117-3f2d-4a2c-8bea-548cbe8cffbf-utilities\") pod \"community-operators-bbvfb\" (UID: \"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf\") " pod="openshift-marketplace/community-operators-bbvfb" Dec 02 14:54:34 crc kubenswrapper[4814]: I1202 14:54:34.468634 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzgvs\" (UniqueName: \"kubernetes.io/projected/cefbb117-3f2d-4a2c-8bea-548cbe8cffbf-kube-api-access-tzgvs\") pod \"community-operators-bbvfb\" (UID: \"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf\") " pod="openshift-marketplace/community-operators-bbvfb" Dec 02 14:54:34 crc kubenswrapper[4814]: I1202 14:54:34.468995 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cefbb117-3f2d-4a2c-8bea-548cbe8cffbf-catalog-content\") pod \"community-operators-bbvfb\" (UID: \"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf\") " pod="openshift-marketplace/community-operators-bbvfb" Dec 02 14:54:34 crc kubenswrapper[4814]: I1202 14:54:34.469011 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cefbb117-3f2d-4a2c-8bea-548cbe8cffbf-utilities\") pod \"community-operators-bbvfb\" (UID: \"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf\") " pod="openshift-marketplace/community-operators-bbvfb" Dec 02 14:54:34 crc kubenswrapper[4814]: I1202 14:54:34.494254 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzgvs\" (UniqueName: \"kubernetes.io/projected/cefbb117-3f2d-4a2c-8bea-548cbe8cffbf-kube-api-access-tzgvs\") pod \"community-operators-bbvfb\" (UID: \"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf\") " pod="openshift-marketplace/community-operators-bbvfb" Dec 02 14:54:34 crc kubenswrapper[4814]: I1202 14:54:34.626639 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bbvfb"
Dec 02 14:54:35 crc kubenswrapper[4814]: I1202 14:54:35.156015 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bbvfb"]
Dec 02 14:54:35 crc kubenswrapper[4814]: I1202 14:54:35.505385 4814 generic.go:334] "Generic (PLEG): container finished" podID="cefbb117-3f2d-4a2c-8bea-548cbe8cffbf" containerID="f0a3a7ad81e8bf25519b8c9f6c3ee436806e7788768b48af08864e914b1a751b" exitCode=0
Dec 02 14:54:35 crc kubenswrapper[4814]: I1202 14:54:35.505472 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbvfb" event={"ID":"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf","Type":"ContainerDied","Data":"f0a3a7ad81e8bf25519b8c9f6c3ee436806e7788768b48af08864e914b1a751b"}
Dec 02 14:54:35 crc kubenswrapper[4814]: I1202 14:54:35.505542 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbvfb" event={"ID":"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf","Type":"ContainerStarted","Data":"d472c7a1316b0918c276285d4a5b6d253dc501855e62e1a4caee9329b956fc12"}
Dec 02 14:54:35 crc kubenswrapper[4814]: I1202 14:54:35.507307 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 02 14:54:37 crc kubenswrapper[4814]: I1202 14:54:37.520277 4814 generic.go:334] "Generic (PLEG): container finished" podID="cefbb117-3f2d-4a2c-8bea-548cbe8cffbf" containerID="56a1cd05e12386330c6367fee38c7ef2440dd1ffae942c2ec96d4e0d15b176ba" exitCode=0
Dec 02 14:54:37 crc kubenswrapper[4814]: I1202 14:54:37.520326 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbvfb" event={"ID":"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf","Type":"ContainerDied","Data":"56a1cd05e12386330c6367fee38c7ef2440dd1ffae942c2ec96d4e0d15b176ba"}
Dec 02 14:54:38 crc kubenswrapper[4814]: I1202 14:54:38.530125 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbvfb" event={"ID":"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf","Type":"ContainerStarted","Data":"961023d85670998db9553c4de3d1114de76ef0e02616cd0346218bf73732a798"}
Dec 02 14:54:38 crc kubenswrapper[4814]: I1202 14:54:38.556792 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bbvfb" podStartSLOduration=2.082948132 podStartE2EDuration="4.55677441s" podCreationTimestamp="2025-12-02 14:54:34 +0000 UTC" firstStartedPulling="2025-12-02 14:54:35.506969303 +0000 UTC m=+1537.631331517" lastFinishedPulling="2025-12-02 14:54:37.980795591 +0000 UTC m=+1540.105157795" observedRunningTime="2025-12-02 14:54:38.553370595 +0000 UTC m=+1540.677732819" watchObservedRunningTime="2025-12-02 14:54:38.55677441 +0000 UTC m=+1540.681136634"
Dec 02 14:54:44 crc kubenswrapper[4814]: I1202 14:54:44.626833 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bbvfb"
Dec 02 14:54:44 crc kubenswrapper[4814]: I1202 14:54:44.627914 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bbvfb"
Dec 02 14:54:44 crc kubenswrapper[4814]: I1202 14:54:44.686583 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bbvfb"
Dec 02 14:54:45 crc kubenswrapper[4814]: I1202 14:54:45.638625 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bbvfb"
Dec 02 14:54:48 crc kubenswrapper[4814]: I1202 14:54:48.259344 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bbvfb"]
Dec 02 14:54:48 crc kubenswrapper[4814]: I1202 14:54:48.605774 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bbvfb" podUID="cefbb117-3f2d-4a2c-8bea-548cbe8cffbf" containerName="registry-server" containerID="cri-o://961023d85670998db9553c4de3d1114de76ef0e02616cd0346218bf73732a798" gracePeriod=2
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.546379 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bbvfb"
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.605980 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzgvs\" (UniqueName: \"kubernetes.io/projected/cefbb117-3f2d-4a2c-8bea-548cbe8cffbf-kube-api-access-tzgvs\") pod \"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf\" (UID: \"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf\") "
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.607059 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cefbb117-3f2d-4a2c-8bea-548cbe8cffbf-catalog-content\") pod \"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf\" (UID: \"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf\") "
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.607127 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cefbb117-3f2d-4a2c-8bea-548cbe8cffbf-utilities\") pod \"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf\" (UID: \"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf\") "
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.609145 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cefbb117-3f2d-4a2c-8bea-548cbe8cffbf-utilities" (OuterVolumeSpecName: "utilities") pod "cefbb117-3f2d-4a2c-8bea-548cbe8cffbf" (UID: "cefbb117-3f2d-4a2c-8bea-548cbe8cffbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.614988 4814 generic.go:334] "Generic (PLEG): container finished" podID="cefbb117-3f2d-4a2c-8bea-548cbe8cffbf" containerID="961023d85670998db9553c4de3d1114de76ef0e02616cd0346218bf73732a798" exitCode=0
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.615317 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbvfb" event={"ID":"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf","Type":"ContainerDied","Data":"961023d85670998db9553c4de3d1114de76ef0e02616cd0346218bf73732a798"}
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.615349 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbvfb" event={"ID":"cefbb117-3f2d-4a2c-8bea-548cbe8cffbf","Type":"ContainerDied","Data":"d472c7a1316b0918c276285d4a5b6d253dc501855e62e1a4caee9329b956fc12"}
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.615370 4814 scope.go:117] "RemoveContainer" containerID="961023d85670998db9553c4de3d1114de76ef0e02616cd0346218bf73732a798"
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.615539 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bbvfb"
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.617720 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cefbb117-3f2d-4a2c-8bea-548cbe8cffbf-kube-api-access-tzgvs" (OuterVolumeSpecName: "kube-api-access-tzgvs") pod "cefbb117-3f2d-4a2c-8bea-548cbe8cffbf" (UID: "cefbb117-3f2d-4a2c-8bea-548cbe8cffbf"). InnerVolumeSpecName "kube-api-access-tzgvs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.660051 4814 scope.go:117] "RemoveContainer" containerID="56a1cd05e12386330c6367fee38c7ef2440dd1ffae942c2ec96d4e0d15b176ba"
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.695263 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cefbb117-3f2d-4a2c-8bea-548cbe8cffbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cefbb117-3f2d-4a2c-8bea-548cbe8cffbf" (UID: "cefbb117-3f2d-4a2c-8bea-548cbe8cffbf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.696988 4814 scope.go:117] "RemoveContainer" containerID="f0a3a7ad81e8bf25519b8c9f6c3ee436806e7788768b48af08864e914b1a751b"
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.709539 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzgvs\" (UniqueName: \"kubernetes.io/projected/cefbb117-3f2d-4a2c-8bea-548cbe8cffbf-kube-api-access-tzgvs\") on node \"crc\" DevicePath \"\""
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.709570 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cefbb117-3f2d-4a2c-8bea-548cbe8cffbf-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.709579 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cefbb117-3f2d-4a2c-8bea-548cbe8cffbf-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.720778 4814 scope.go:117] "RemoveContainer" containerID="961023d85670998db9553c4de3d1114de76ef0e02616cd0346218bf73732a798"
Dec 02 14:54:49 crc kubenswrapper[4814]: E1202 14:54:49.721363 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"961023d85670998db9553c4de3d1114de76ef0e02616cd0346218bf73732a798\": container with ID starting with 961023d85670998db9553c4de3d1114de76ef0e02616cd0346218bf73732a798 not found: ID does not exist" containerID="961023d85670998db9553c4de3d1114de76ef0e02616cd0346218bf73732a798"
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.721420 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"961023d85670998db9553c4de3d1114de76ef0e02616cd0346218bf73732a798"} err="failed to get container status \"961023d85670998db9553c4de3d1114de76ef0e02616cd0346218bf73732a798\": rpc error: code = NotFound desc = could not find container \"961023d85670998db9553c4de3d1114de76ef0e02616cd0346218bf73732a798\": container with ID starting with 961023d85670998db9553c4de3d1114de76ef0e02616cd0346218bf73732a798 not found: ID does not exist"
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.721457 4814 scope.go:117] "RemoveContainer" containerID="56a1cd05e12386330c6367fee38c7ef2440dd1ffae942c2ec96d4e0d15b176ba"
Dec 02 14:54:49 crc kubenswrapper[4814]: E1202 14:54:49.721868 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56a1cd05e12386330c6367fee38c7ef2440dd1ffae942c2ec96d4e0d15b176ba\": container with ID starting with 56a1cd05e12386330c6367fee38c7ef2440dd1ffae942c2ec96d4e0d15b176ba not found: ID does not exist" containerID="56a1cd05e12386330c6367fee38c7ef2440dd1ffae942c2ec96d4e0d15b176ba"
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.721908 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56a1cd05e12386330c6367fee38c7ef2440dd1ffae942c2ec96d4e0d15b176ba"} err="failed to get container status \"56a1cd05e12386330c6367fee38c7ef2440dd1ffae942c2ec96d4e0d15b176ba\": rpc error: code = NotFound desc = could not find container \"56a1cd05e12386330c6367fee38c7ef2440dd1ffae942c2ec96d4e0d15b176ba\": container with ID starting with 56a1cd05e12386330c6367fee38c7ef2440dd1ffae942c2ec96d4e0d15b176ba not found: ID does not exist"
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.721941 4814 scope.go:117] "RemoveContainer" containerID="f0a3a7ad81e8bf25519b8c9f6c3ee436806e7788768b48af08864e914b1a751b"
Dec 02 14:54:49 crc kubenswrapper[4814]: E1202 14:54:49.722263 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0a3a7ad81e8bf25519b8c9f6c3ee436806e7788768b48af08864e914b1a751b\": container with ID starting with f0a3a7ad81e8bf25519b8c9f6c3ee436806e7788768b48af08864e914b1a751b not found: ID does not exist" containerID="f0a3a7ad81e8bf25519b8c9f6c3ee436806e7788768b48af08864e914b1a751b"
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.722296 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0a3a7ad81e8bf25519b8c9f6c3ee436806e7788768b48af08864e914b1a751b"} err="failed to get container status \"f0a3a7ad81e8bf25519b8c9f6c3ee436806e7788768b48af08864e914b1a751b\": rpc error: code = NotFound desc = could not find container \"f0a3a7ad81e8bf25519b8c9f6c3ee436806e7788768b48af08864e914b1a751b\": container with ID starting with f0a3a7ad81e8bf25519b8c9f6c3ee436806e7788768b48af08864e914b1a751b not found: ID does not exist"
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.946749 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bbvfb"]
Dec 02 14:54:49 crc kubenswrapper[4814]: I1202 14:54:49.956035 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bbvfb"]
Dec 02 14:54:50 crc kubenswrapper[4814]: I1202 14:54:50.155510 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cefbb117-3f2d-4a2c-8bea-548cbe8cffbf" path="/var/lib/kubelet/pods/cefbb117-3f2d-4a2c-8bea-548cbe8cffbf/volumes"
Dec 02 14:54:50 crc kubenswrapper[4814]: I1202 14:54:50.651052 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 14:54:50 crc kubenswrapper[4814]: I1202 14:54:50.652275 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 14:55:10 crc kubenswrapper[4814]: I1202 14:55:10.476444 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t89c6"]
Dec 02 14:55:10 crc kubenswrapper[4814]: E1202 14:55:10.483897 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cefbb117-3f2d-4a2c-8bea-548cbe8cffbf" containerName="registry-server"
Dec 02 14:55:10 crc kubenswrapper[4814]: I1202 14:55:10.484133 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="cefbb117-3f2d-4a2c-8bea-548cbe8cffbf" containerName="registry-server"
Dec 02 14:55:10 crc kubenswrapper[4814]: E1202 14:55:10.484176 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cefbb117-3f2d-4a2c-8bea-548cbe8cffbf" containerName="extract-utilities"
Dec 02 14:55:10 crc kubenswrapper[4814]: I1202 14:55:10.484188 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="cefbb117-3f2d-4a2c-8bea-548cbe8cffbf" containerName="extract-utilities"
Dec 02 14:55:10 crc kubenswrapper[4814]: E1202 14:55:10.484205 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cefbb117-3f2d-4a2c-8bea-548cbe8cffbf" containerName="extract-content"
Dec 02 14:55:10 crc kubenswrapper[4814]: I1202 14:55:10.484215 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="cefbb117-3f2d-4a2c-8bea-548cbe8cffbf" containerName="extract-content"
Dec 02 14:55:10 crc kubenswrapper[4814]: I1202 14:55:10.484427 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="cefbb117-3f2d-4a2c-8bea-548cbe8cffbf" containerName="registry-server"
Dec 02 14:55:10 crc kubenswrapper[4814]: I1202 14:55:10.485902 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t89c6"
Dec 02 14:55:10 crc kubenswrapper[4814]: I1202 14:55:10.491859 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t89c6"]
Dec 02 14:55:10 crc kubenswrapper[4814]: I1202 14:55:10.538547 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e80a3d3-fd25-49a9-a80d-c9a4fb26f990-utilities\") pod \"redhat-marketplace-t89c6\" (UID: \"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990\") " pod="openshift-marketplace/redhat-marketplace-t89c6"
Dec 02 14:55:10 crc kubenswrapper[4814]: I1202 14:55:10.538616 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e80a3d3-fd25-49a9-a80d-c9a4fb26f990-catalog-content\") pod \"redhat-marketplace-t89c6\" (UID: \"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990\") " pod="openshift-marketplace/redhat-marketplace-t89c6"
Dec 02 14:55:10 crc kubenswrapper[4814]: I1202 14:55:10.538728 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lvjt\" (UniqueName: \"kubernetes.io/projected/9e80a3d3-fd25-49a9-a80d-c9a4fb26f990-kube-api-access-5lvjt\") pod \"redhat-marketplace-t89c6\" (UID: \"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990\") " pod="openshift-marketplace/redhat-marketplace-t89c6"
Dec 02 14:55:10 crc kubenswrapper[4814]: I1202 14:55:10.639796 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e80a3d3-fd25-49a9-a80d-c9a4fb26f990-utilities\") pod \"redhat-marketplace-t89c6\" (UID: \"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990\") " pod="openshift-marketplace/redhat-marketplace-t89c6"
Dec 02 14:55:10 crc kubenswrapper[4814]: I1202 14:55:10.639865 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e80a3d3-fd25-49a9-a80d-c9a4fb26f990-catalog-content\") pod \"redhat-marketplace-t89c6\" (UID: \"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990\") " pod="openshift-marketplace/redhat-marketplace-t89c6"
Dec 02 14:55:10 crc kubenswrapper[4814]: I1202 14:55:10.639938 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lvjt\" (UniqueName: \"kubernetes.io/projected/9e80a3d3-fd25-49a9-a80d-c9a4fb26f990-kube-api-access-5lvjt\") pod \"redhat-marketplace-t89c6\" (UID: \"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990\") " pod="openshift-marketplace/redhat-marketplace-t89c6"
Dec 02 14:55:10 crc kubenswrapper[4814]: I1202 14:55:10.640719 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e80a3d3-fd25-49a9-a80d-c9a4fb26f990-utilities\") pod \"redhat-marketplace-t89c6\" (UID: \"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990\") " pod="openshift-marketplace/redhat-marketplace-t89c6"
Dec 02 14:55:10 crc kubenswrapper[4814]: I1202 14:55:10.640786 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e80a3d3-fd25-49a9-a80d-c9a4fb26f990-catalog-content\") pod \"redhat-marketplace-t89c6\" (UID: \"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990\") " pod="openshift-marketplace/redhat-marketplace-t89c6"
Dec 02 14:55:10 crc kubenswrapper[4814]: I1202 14:55:10.676152 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lvjt\" (UniqueName: \"kubernetes.io/projected/9e80a3d3-fd25-49a9-a80d-c9a4fb26f990-kube-api-access-5lvjt\") pod \"redhat-marketplace-t89c6\" (UID: \"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990\") " pod="openshift-marketplace/redhat-marketplace-t89c6"
Dec 02 14:55:10 crc kubenswrapper[4814]: I1202 14:55:10.811794 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t89c6"
Dec 02 14:55:11 crc kubenswrapper[4814]: I1202 14:55:11.369614 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t89c6"]
Dec 02 14:55:11 crc kubenswrapper[4814]: I1202 14:55:11.804933 4814 generic.go:334] "Generic (PLEG): container finished" podID="9e80a3d3-fd25-49a9-a80d-c9a4fb26f990" containerID="9c757f99734c0263464cf84a7c327d6a6f4fcec5275fccac1d1b42c80df6026b" exitCode=0
Dec 02 14:55:11 crc kubenswrapper[4814]: I1202 14:55:11.804980 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t89c6" event={"ID":"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990","Type":"ContainerDied","Data":"9c757f99734c0263464cf84a7c327d6a6f4fcec5275fccac1d1b42c80df6026b"}
Dec 02 14:55:11 crc kubenswrapper[4814]: I1202 14:55:11.805310 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t89c6" event={"ID":"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990","Type":"ContainerStarted","Data":"4feea6bad840a97b5aa428b5156a97992ace67f5b45ec5ba37c37069c117d117"}
Dec 02 14:55:12 crc kubenswrapper[4814]: I1202 14:55:12.815448 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t89c6" event={"ID":"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990","Type":"ContainerStarted","Data":"606b8dcc591ed023a431a919bb6b55c76cad5f4e0f6355191c25ba2ef5482c98"}
Dec 02 14:55:13 crc kubenswrapper[4814]: I1202 14:55:13.825187 4814 generic.go:334] "Generic (PLEG): container finished" podID="9e80a3d3-fd25-49a9-a80d-c9a4fb26f990" containerID="606b8dcc591ed023a431a919bb6b55c76cad5f4e0f6355191c25ba2ef5482c98" exitCode=0
Dec 02 14:55:13 crc kubenswrapper[4814]: I1202 14:55:13.825255 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t89c6" event={"ID":"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990","Type":"ContainerDied","Data":"606b8dcc591ed023a431a919bb6b55c76cad5f4e0f6355191c25ba2ef5482c98"}
Dec 02 14:55:14 crc kubenswrapper[4814]: I1202 14:55:14.848314 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t89c6" event={"ID":"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990","Type":"ContainerStarted","Data":"bc8b761732cc694ce3aae4359c1ee8faf374ed8627be46041191ffbed88fa84c"}
Dec 02 14:55:14 crc kubenswrapper[4814]: I1202 14:55:14.879134 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t89c6" podStartSLOduration=2.398967787 podStartE2EDuration="4.879113473s" podCreationTimestamp="2025-12-02 14:55:10 +0000 UTC" firstStartedPulling="2025-12-02 14:55:11.806721993 +0000 UTC m=+1573.931084207" lastFinishedPulling="2025-12-02 14:55:14.286867689 +0000 UTC m=+1576.411229893" observedRunningTime="2025-12-02 14:55:14.873719419 +0000 UTC m=+1576.998081633" watchObservedRunningTime="2025-12-02 14:55:14.879113473 +0000 UTC m=+1577.003475687"
Dec 02 14:55:20 crc kubenswrapper[4814]: I1202 14:55:20.650809 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 14:55:20 crc kubenswrapper[4814]: I1202 14:55:20.651257 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 14:55:20 crc kubenswrapper[4814]: I1202 14:55:20.812612 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t89c6"
Dec 02 14:55:20 crc kubenswrapper[4814]: I1202 14:55:20.812661 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t89c6"
Dec 02 14:55:20 crc kubenswrapper[4814]: I1202 14:55:20.877716 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t89c6"
Dec 02 14:55:20 crc kubenswrapper[4814]: I1202 14:55:20.956777 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t89c6"
Dec 02 14:55:25 crc kubenswrapper[4814]: I1202 14:55:25.256443 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t89c6"]
Dec 02 14:55:25 crc kubenswrapper[4814]: I1202 14:55:25.256902 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t89c6" podUID="9e80a3d3-fd25-49a9-a80d-c9a4fb26f990" containerName="registry-server" containerID="cri-o://bc8b761732cc694ce3aae4359c1ee8faf374ed8627be46041191ffbed88fa84c" gracePeriod=2
Dec 02 14:55:25 crc kubenswrapper[4814]: I1202 14:55:25.960618 4814 generic.go:334] "Generic (PLEG): container finished" podID="9e80a3d3-fd25-49a9-a80d-c9a4fb26f990" containerID="bc8b761732cc694ce3aae4359c1ee8faf374ed8627be46041191ffbed88fa84c" exitCode=0
Dec 02 14:55:25 crc kubenswrapper[4814]: I1202 14:55:25.961187 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t89c6" event={"ID":"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990","Type":"ContainerDied","Data":"bc8b761732cc694ce3aae4359c1ee8faf374ed8627be46041191ffbed88fa84c"}
Dec 02 14:55:26 crc kubenswrapper[4814]: I1202 14:55:26.278341 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t89c6"
Dec 02 14:55:26 crc kubenswrapper[4814]: I1202 14:55:26.331870 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lvjt\" (UniqueName: \"kubernetes.io/projected/9e80a3d3-fd25-49a9-a80d-c9a4fb26f990-kube-api-access-5lvjt\") pod \"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990\" (UID: \"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990\") "
Dec 02 14:55:26 crc kubenswrapper[4814]: I1202 14:55:26.333255 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e80a3d3-fd25-49a9-a80d-c9a4fb26f990-catalog-content\") pod \"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990\" (UID: \"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990\") "
Dec 02 14:55:26 crc kubenswrapper[4814]: I1202 14:55:26.333542 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e80a3d3-fd25-49a9-a80d-c9a4fb26f990-utilities\") pod \"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990\" (UID: \"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990\") "
Dec 02 14:55:26 crc kubenswrapper[4814]: I1202 14:55:26.335013 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e80a3d3-fd25-49a9-a80d-c9a4fb26f990-utilities" (OuterVolumeSpecName: "utilities") pod "9e80a3d3-fd25-49a9-a80d-c9a4fb26f990" (UID: "9e80a3d3-fd25-49a9-a80d-c9a4fb26f990"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 14:55:26 crc kubenswrapper[4814]: I1202 14:55:26.346492 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e80a3d3-fd25-49a9-a80d-c9a4fb26f990-kube-api-access-5lvjt" (OuterVolumeSpecName: "kube-api-access-5lvjt") pod "9e80a3d3-fd25-49a9-a80d-c9a4fb26f990" (UID: "9e80a3d3-fd25-49a9-a80d-c9a4fb26f990"). InnerVolumeSpecName "kube-api-access-5lvjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:55:26 crc kubenswrapper[4814]: I1202 14:55:26.358526 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e80a3d3-fd25-49a9-a80d-c9a4fb26f990-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e80a3d3-fd25-49a9-a80d-c9a4fb26f990" (UID: "9e80a3d3-fd25-49a9-a80d-c9a4fb26f990"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 14:55:26 crc kubenswrapper[4814]: I1202 14:55:26.436159 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e80a3d3-fd25-49a9-a80d-c9a4fb26f990-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 14:55:26 crc kubenswrapper[4814]: I1202 14:55:26.436458 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lvjt\" (UniqueName: \"kubernetes.io/projected/9e80a3d3-fd25-49a9-a80d-c9a4fb26f990-kube-api-access-5lvjt\") on node \"crc\" DevicePath \"\""
Dec 02 14:55:26 crc kubenswrapper[4814]: I1202 14:55:26.436563 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e80a3d3-fd25-49a9-a80d-c9a4fb26f990-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 14:55:26 crc kubenswrapper[4814]: I1202 14:55:26.971929 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t89c6" event={"ID":"9e80a3d3-fd25-49a9-a80d-c9a4fb26f990","Type":"ContainerDied","Data":"4feea6bad840a97b5aa428b5156a97992ace67f5b45ec5ba37c37069c117d117"}
Dec 02 14:55:26 crc kubenswrapper[4814]: I1202 14:55:26.971990 4814 scope.go:117] "RemoveContainer" containerID="bc8b761732cc694ce3aae4359c1ee8faf374ed8627be46041191ffbed88fa84c"
Dec 02 14:55:26 crc kubenswrapper[4814]: I1202 14:55:26.972188 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t89c6"
Dec 02 14:55:27 crc kubenswrapper[4814]: I1202 14:55:27.005603 4814 scope.go:117] "RemoveContainer" containerID="606b8dcc591ed023a431a919bb6b55c76cad5f4e0f6355191c25ba2ef5482c98"
Dec 02 14:55:27 crc kubenswrapper[4814]: I1202 14:55:27.016298 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t89c6"]
Dec 02 14:55:27 crc kubenswrapper[4814]: I1202 14:55:27.025818 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t89c6"]
Dec 02 14:55:27 crc kubenswrapper[4814]: I1202 14:55:27.033195 4814 scope.go:117] "RemoveContainer" containerID="9c757f99734c0263464cf84a7c327d6a6f4fcec5275fccac1d1b42c80df6026b"
Dec 02 14:55:28 crc kubenswrapper[4814]: I1202 14:55:28.165087 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e80a3d3-fd25-49a9-a80d-c9a4fb26f990" path="/var/lib/kubelet/pods/9e80a3d3-fd25-49a9-a80d-c9a4fb26f990/volumes"
Dec 02 14:55:48 crc kubenswrapper[4814]: I1202 14:55:48.630898 4814 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wmlq8 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 02 14:55:48 crc kubenswrapper[4814]: I1202 14:55:48.634338 4814 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wmlq8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 02 14:55:48 crc kubenswrapper[4814]: I1202 14:55:48.635740 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wmlq8" podUID="cd894607-26ea-4312-98b1-99c3cdcd522a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 02 14:55:48 crc kubenswrapper[4814]: I1202 14:55:48.635559 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-wmlq8" podUID="cd894607-26ea-4312-98b1-99c3cdcd522a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 02 14:55:50 crc kubenswrapper[4814]: I1202 14:55:50.651319 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 14:55:50 crc kubenswrapper[4814]: I1202 14:55:50.651630 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 14:55:50 crc kubenswrapper[4814]: I1202 14:55:50.651669 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8"
Dec 02 14:55:50 crc kubenswrapper[4814]: I1202 14:55:50.652220 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa"} pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 14:55:50 crc kubenswrapper[4814]: I1202 14:55:50.652290 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" containerID="cri-o://d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" gracePeriod=600
Dec 02 14:55:50 crc kubenswrapper[4814]: E1202 14:55:50.772573 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c"
Dec 02 14:55:51 crc kubenswrapper[4814]: I1202 14:55:51.700246 4814 generic.go:334] "Generic (PLEG): container finished" podID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" exitCode=0
Dec 02 14:55:51 crc kubenswrapper[4814]: I1202 14:55:51.700290 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerDied","Data":"d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa"}
Dec 02 14:55:51 crc kubenswrapper[4814]: I1202 14:55:51.700325 4814 scope.go:117] "RemoveContainer" containerID="5dc327b43bfefefc885573889c7f1e87316bb1d715cad4948d07c7ce0116b628"
Dec 02 14:55:51 crc kubenswrapper[4814]: I1202 14:55:51.700930 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa"
Dec 02 14:55:51 crc kubenswrapper[4814]: E1202 14:55:51.701205 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c"
Dec 02 14:56:05 crc kubenswrapper[4814]: I1202 14:56:05.146298 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa"
Dec 02 14:56:05 crc kubenswrapper[4814]: E1202 14:56:05.147115 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c"
Dec 02 14:56:20 crc kubenswrapper[4814]: I1202 14:56:20.144640 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa"
Dec 02 14:56:20 crc kubenswrapper[4814]: E1202 14:56:20.146183 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c"
Dec 02 14:56:35 crc kubenswrapper[4814]: I1202 14:56:35.145172 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa"
Dec 02 14:56:35 crc kubenswrapper[4814]: E1202 14:56:35.146048 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c"
Dec 02 14:56:38 crc kubenswrapper[4814]: I1202 14:56:38.917187 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-wwthl"]
Dec 02 14:56:38 crc kubenswrapper[4814]: E1202 14:56:38.917865 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e80a3d3-fd25-49a9-a80d-c9a4fb26f990" containerName="extract-content"
Dec 02 14:56:38 crc kubenswrapper[4814]: I1202 14:56:38.917877 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e80a3d3-fd25-49a9-a80d-c9a4fb26f990" containerName="extract-content"
Dec 02 14:56:38 crc kubenswrapper[4814]: E1202 14:56:38.917888 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e80a3d3-fd25-49a9-a80d-c9a4fb26f990" containerName="extract-utilities"
Dec 02 14:56:38 crc kubenswrapper[4814]: I1202 14:56:38.917894 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e80a3d3-fd25-49a9-a80d-c9a4fb26f990" containerName="extract-utilities"
Dec 02 14:56:38 crc kubenswrapper[4814]: E1202 14:56:38.917906 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e80a3d3-fd25-49a9-a80d-c9a4fb26f990" containerName="registry-server"
Dec 02 14:56:38 crc kubenswrapper[4814]: I1202 14:56:38.917912 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e80a3d3-fd25-49a9-a80d-c9a4fb26f990" containerName="registry-server"
Dec 02 14:56:38 crc kubenswrapper[4814]: I1202 14:56:38.918057 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e80a3d3-fd25-49a9-a80d-c9a4fb26f990" containerName="registry-server"
Dec 02 14:56:38 crc kubenswrapper[4814]: I1202 14:56:38.918620 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-wwthl"
Dec 02 14:56:38 crc kubenswrapper[4814]: I1202 14:56:38.927111 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-wwthl"]
Dec 02 14:56:39 crc kubenswrapper[4814]: I1202 14:56:39.008688 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-f774-account-create-update-nmns6"]
Dec 02 14:56:39 crc kubenswrapper[4814]: I1202 14:56:39.009677 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-f774-account-create-update-nmns6" Dec 02 14:56:39 crc kubenswrapper[4814]: I1202 14:56:39.011833 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Dec 02 14:56:39 crc kubenswrapper[4814]: I1202 14:56:39.022558 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-f774-account-create-update-nmns6"] Dec 02 14:56:39 crc kubenswrapper[4814]: I1202 14:56:39.042273 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8792bd78-30e1-463d-bbb1-3c330c5500ce-operator-scripts\") pod \"watcher-db-create-wwthl\" (UID: \"8792bd78-30e1-463d-bbb1-3c330c5500ce\") " pod="watcher-kuttl-default/watcher-db-create-wwthl" Dec 02 14:56:39 crc kubenswrapper[4814]: I1202 14:56:39.042373 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6k9t\" (UniqueName: \"kubernetes.io/projected/8792bd78-30e1-463d-bbb1-3c330c5500ce-kube-api-access-p6k9t\") pod \"watcher-db-create-wwthl\" (UID: \"8792bd78-30e1-463d-bbb1-3c330c5500ce\") " pod="watcher-kuttl-default/watcher-db-create-wwthl" Dec 02 14:56:39 crc kubenswrapper[4814]: I1202 14:56:39.145821 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8792bd78-30e1-463d-bbb1-3c330c5500ce-operator-scripts\") pod \"watcher-db-create-wwthl\" (UID: \"8792bd78-30e1-463d-bbb1-3c330c5500ce\") " pod="watcher-kuttl-default/watcher-db-create-wwthl" Dec 02 14:56:39 crc kubenswrapper[4814]: I1202 14:56:39.145128 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8792bd78-30e1-463d-bbb1-3c330c5500ce-operator-scripts\") pod \"watcher-db-create-wwthl\" (UID: 
\"8792bd78-30e1-463d-bbb1-3c330c5500ce\") " pod="watcher-kuttl-default/watcher-db-create-wwthl" Dec 02 14:56:39 crc kubenswrapper[4814]: I1202 14:56:39.146195 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6k9t\" (UniqueName: \"kubernetes.io/projected/8792bd78-30e1-463d-bbb1-3c330c5500ce-kube-api-access-p6k9t\") pod \"watcher-db-create-wwthl\" (UID: \"8792bd78-30e1-463d-bbb1-3c330c5500ce\") " pod="watcher-kuttl-default/watcher-db-create-wwthl" Dec 02 14:56:39 crc kubenswrapper[4814]: I1202 14:56:39.146275 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd623c5f-e23a-4914-a707-d4c67b3caccd-operator-scripts\") pod \"watcher-f774-account-create-update-nmns6\" (UID: \"cd623c5f-e23a-4914-a707-d4c67b3caccd\") " pod="watcher-kuttl-default/watcher-f774-account-create-update-nmns6" Dec 02 14:56:39 crc kubenswrapper[4814]: I1202 14:56:39.146342 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jztls\" (UniqueName: \"kubernetes.io/projected/cd623c5f-e23a-4914-a707-d4c67b3caccd-kube-api-access-jztls\") pod \"watcher-f774-account-create-update-nmns6\" (UID: \"cd623c5f-e23a-4914-a707-d4c67b3caccd\") " pod="watcher-kuttl-default/watcher-f774-account-create-update-nmns6" Dec 02 14:56:39 crc kubenswrapper[4814]: I1202 14:56:39.182531 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6k9t\" (UniqueName: \"kubernetes.io/projected/8792bd78-30e1-463d-bbb1-3c330c5500ce-kube-api-access-p6k9t\") pod \"watcher-db-create-wwthl\" (UID: \"8792bd78-30e1-463d-bbb1-3c330c5500ce\") " pod="watcher-kuttl-default/watcher-db-create-wwthl" Dec 02 14:56:39 crc kubenswrapper[4814]: I1202 14:56:39.247648 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/cd623c5f-e23a-4914-a707-d4c67b3caccd-operator-scripts\") pod \"watcher-f774-account-create-update-nmns6\" (UID: \"cd623c5f-e23a-4914-a707-d4c67b3caccd\") " pod="watcher-kuttl-default/watcher-f774-account-create-update-nmns6" Dec 02 14:56:39 crc kubenswrapper[4814]: I1202 14:56:39.247702 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jztls\" (UniqueName: \"kubernetes.io/projected/cd623c5f-e23a-4914-a707-d4c67b3caccd-kube-api-access-jztls\") pod \"watcher-f774-account-create-update-nmns6\" (UID: \"cd623c5f-e23a-4914-a707-d4c67b3caccd\") " pod="watcher-kuttl-default/watcher-f774-account-create-update-nmns6" Dec 02 14:56:39 crc kubenswrapper[4814]: I1202 14:56:39.247992 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-wwthl" Dec 02 14:56:39 crc kubenswrapper[4814]: I1202 14:56:39.248686 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd623c5f-e23a-4914-a707-d4c67b3caccd-operator-scripts\") pod \"watcher-f774-account-create-update-nmns6\" (UID: \"cd623c5f-e23a-4914-a707-d4c67b3caccd\") " pod="watcher-kuttl-default/watcher-f774-account-create-update-nmns6" Dec 02 14:56:39 crc kubenswrapper[4814]: I1202 14:56:39.268893 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jztls\" (UniqueName: \"kubernetes.io/projected/cd623c5f-e23a-4914-a707-d4c67b3caccd-kube-api-access-jztls\") pod \"watcher-f774-account-create-update-nmns6\" (UID: \"cd623c5f-e23a-4914-a707-d4c67b3caccd\") " pod="watcher-kuttl-default/watcher-f774-account-create-update-nmns6" Dec 02 14:56:39 crc kubenswrapper[4814]: I1202 14:56:39.323328 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-f774-account-create-update-nmns6" Dec 02 14:56:39 crc kubenswrapper[4814]: I1202 14:56:39.713551 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-wwthl"] Dec 02 14:56:39 crc kubenswrapper[4814]: I1202 14:56:39.811645 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-f774-account-create-update-nmns6"] Dec 02 14:56:39 crc kubenswrapper[4814]: W1202 14:56:39.835547 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd623c5f_e23a_4914_a707_d4c67b3caccd.slice/crio-fa940edcb89b3588860ef2ccf97cc5100b3703ce2b5015806c43ff430fc33c04 WatchSource:0}: Error finding container fa940edcb89b3588860ef2ccf97cc5100b3703ce2b5015806c43ff430fc33c04: Status 404 returned error can't find the container with id fa940edcb89b3588860ef2ccf97cc5100b3703ce2b5015806c43ff430fc33c04 Dec 02 14:56:40 crc kubenswrapper[4814]: I1202 14:56:40.155603 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-wwthl" event={"ID":"8792bd78-30e1-463d-bbb1-3c330c5500ce","Type":"ContainerStarted","Data":"8f2c70b6e17fc31fc4a291ca17fadeed772c02e55ac6cab2636fc120c9cb0a37"} Dec 02 14:56:40 crc kubenswrapper[4814]: I1202 14:56:40.155634 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-wwthl" event={"ID":"8792bd78-30e1-463d-bbb1-3c330c5500ce","Type":"ContainerStarted","Data":"b416d7808712e28c5f6883ab20312b52e8728f5d52e2b6604b9deb3a379cffd4"} Dec 02 14:56:40 crc kubenswrapper[4814]: I1202 14:56:40.155643 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-f774-account-create-update-nmns6" event={"ID":"cd623c5f-e23a-4914-a707-d4c67b3caccd","Type":"ContainerStarted","Data":"07456a08ba900247c581217281c5f48adaa0ce2edaf83475e654efc2dcc9f734"} Dec 02 14:56:40 crc 
kubenswrapper[4814]: I1202 14:56:40.155652 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-f774-account-create-update-nmns6" event={"ID":"cd623c5f-e23a-4914-a707-d4c67b3caccd","Type":"ContainerStarted","Data":"fa940edcb89b3588860ef2ccf97cc5100b3703ce2b5015806c43ff430fc33c04"} Dec 02 14:56:40 crc kubenswrapper[4814]: I1202 14:56:40.200848 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-f774-account-create-update-nmns6" podStartSLOduration=2.200827466 podStartE2EDuration="2.200827466s" podCreationTimestamp="2025-12-02 14:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:56:40.197539024 +0000 UTC m=+1662.321901228" watchObservedRunningTime="2025-12-02 14:56:40.200827466 +0000 UTC m=+1662.325189660" Dec 02 14:56:40 crc kubenswrapper[4814]: I1202 14:56:40.203358 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-db-create-wwthl" podStartSLOduration=2.203351349 podStartE2EDuration="2.203351349s" podCreationTimestamp="2025-12-02 14:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:56:40.182830417 +0000 UTC m=+1662.307192621" watchObservedRunningTime="2025-12-02 14:56:40.203351349 +0000 UTC m=+1662.327713553" Dec 02 14:56:41 crc kubenswrapper[4814]: I1202 14:56:41.165088 4814 generic.go:334] "Generic (PLEG): container finished" podID="cd623c5f-e23a-4914-a707-d4c67b3caccd" containerID="07456a08ba900247c581217281c5f48adaa0ce2edaf83475e654efc2dcc9f734" exitCode=0 Dec 02 14:56:41 crc kubenswrapper[4814]: I1202 14:56:41.165148 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-f774-account-create-update-nmns6" 
event={"ID":"cd623c5f-e23a-4914-a707-d4c67b3caccd","Type":"ContainerDied","Data":"07456a08ba900247c581217281c5f48adaa0ce2edaf83475e654efc2dcc9f734"} Dec 02 14:56:41 crc kubenswrapper[4814]: I1202 14:56:41.167859 4814 generic.go:334] "Generic (PLEG): container finished" podID="8792bd78-30e1-463d-bbb1-3c330c5500ce" containerID="8f2c70b6e17fc31fc4a291ca17fadeed772c02e55ac6cab2636fc120c9cb0a37" exitCode=0 Dec 02 14:56:41 crc kubenswrapper[4814]: I1202 14:56:41.167944 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-wwthl" event={"ID":"8792bd78-30e1-463d-bbb1-3c330c5500ce","Type":"ContainerDied","Data":"8f2c70b6e17fc31fc4a291ca17fadeed772c02e55ac6cab2636fc120c9cb0a37"} Dec 02 14:56:42 crc kubenswrapper[4814]: I1202 14:56:42.557547 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-wwthl" Dec 02 14:56:42 crc kubenswrapper[4814]: I1202 14:56:42.634140 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-f774-account-create-update-nmns6" Dec 02 14:56:42 crc kubenswrapper[4814]: I1202 14:56:42.705288 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8792bd78-30e1-463d-bbb1-3c330c5500ce-operator-scripts\") pod \"8792bd78-30e1-463d-bbb1-3c330c5500ce\" (UID: \"8792bd78-30e1-463d-bbb1-3c330c5500ce\") " Dec 02 14:56:42 crc kubenswrapper[4814]: I1202 14:56:42.705418 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6k9t\" (UniqueName: \"kubernetes.io/projected/8792bd78-30e1-463d-bbb1-3c330c5500ce-kube-api-access-p6k9t\") pod \"8792bd78-30e1-463d-bbb1-3c330c5500ce\" (UID: \"8792bd78-30e1-463d-bbb1-3c330c5500ce\") " Dec 02 14:56:42 crc kubenswrapper[4814]: I1202 14:56:42.707810 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8792bd78-30e1-463d-bbb1-3c330c5500ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8792bd78-30e1-463d-bbb1-3c330c5500ce" (UID: "8792bd78-30e1-463d-bbb1-3c330c5500ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:56:42 crc kubenswrapper[4814]: I1202 14:56:42.712828 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8792bd78-30e1-463d-bbb1-3c330c5500ce-kube-api-access-p6k9t" (OuterVolumeSpecName: "kube-api-access-p6k9t") pod "8792bd78-30e1-463d-bbb1-3c330c5500ce" (UID: "8792bd78-30e1-463d-bbb1-3c330c5500ce"). InnerVolumeSpecName "kube-api-access-p6k9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:56:42 crc kubenswrapper[4814]: I1202 14:56:42.806665 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jztls\" (UniqueName: \"kubernetes.io/projected/cd623c5f-e23a-4914-a707-d4c67b3caccd-kube-api-access-jztls\") pod \"cd623c5f-e23a-4914-a707-d4c67b3caccd\" (UID: \"cd623c5f-e23a-4914-a707-d4c67b3caccd\") " Dec 02 14:56:42 crc kubenswrapper[4814]: I1202 14:56:42.806802 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd623c5f-e23a-4914-a707-d4c67b3caccd-operator-scripts\") pod \"cd623c5f-e23a-4914-a707-d4c67b3caccd\" (UID: \"cd623c5f-e23a-4914-a707-d4c67b3caccd\") " Dec 02 14:56:42 crc kubenswrapper[4814]: I1202 14:56:42.807298 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6k9t\" (UniqueName: \"kubernetes.io/projected/8792bd78-30e1-463d-bbb1-3c330c5500ce-kube-api-access-p6k9t\") on node \"crc\" DevicePath \"\"" Dec 02 14:56:42 crc kubenswrapper[4814]: I1202 14:56:42.807303 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd623c5f-e23a-4914-a707-d4c67b3caccd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cd623c5f-e23a-4914-a707-d4c67b3caccd" (UID: "cd623c5f-e23a-4914-a707-d4c67b3caccd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:56:42 crc kubenswrapper[4814]: I1202 14:56:42.807334 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8792bd78-30e1-463d-bbb1-3c330c5500ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:56:42 crc kubenswrapper[4814]: I1202 14:56:42.811290 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd623c5f-e23a-4914-a707-d4c67b3caccd-kube-api-access-jztls" (OuterVolumeSpecName: "kube-api-access-jztls") pod "cd623c5f-e23a-4914-a707-d4c67b3caccd" (UID: "cd623c5f-e23a-4914-a707-d4c67b3caccd"). InnerVolumeSpecName "kube-api-access-jztls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:56:42 crc kubenswrapper[4814]: I1202 14:56:42.908798 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jztls\" (UniqueName: \"kubernetes.io/projected/cd623c5f-e23a-4914-a707-d4c67b3caccd-kube-api-access-jztls\") on node \"crc\" DevicePath \"\"" Dec 02 14:56:42 crc kubenswrapper[4814]: I1202 14:56:42.908835 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd623c5f-e23a-4914-a707-d4c67b3caccd-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:56:43 crc kubenswrapper[4814]: I1202 14:56:43.188509 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-f774-account-create-update-nmns6" event={"ID":"cd623c5f-e23a-4914-a707-d4c67b3caccd","Type":"ContainerDied","Data":"fa940edcb89b3588860ef2ccf97cc5100b3703ce2b5015806c43ff430fc33c04"} Dec 02 14:56:43 crc kubenswrapper[4814]: I1202 14:56:43.188548 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa940edcb89b3588860ef2ccf97cc5100b3703ce2b5015806c43ff430fc33c04" Dec 02 14:56:43 crc kubenswrapper[4814]: I1202 14:56:43.188528 4814 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-f774-account-create-update-nmns6" Dec 02 14:56:43 crc kubenswrapper[4814]: I1202 14:56:43.190212 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-wwthl" event={"ID":"8792bd78-30e1-463d-bbb1-3c330c5500ce","Type":"ContainerDied","Data":"b416d7808712e28c5f6883ab20312b52e8728f5d52e2b6604b9deb3a379cffd4"} Dec 02 14:56:43 crc kubenswrapper[4814]: I1202 14:56:43.190380 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b416d7808712e28c5f6883ab20312b52e8728f5d52e2b6604b9deb3a379cffd4" Dec 02 14:56:43 crc kubenswrapper[4814]: I1202 14:56:43.190297 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-wwthl" Dec 02 14:56:44 crc kubenswrapper[4814]: I1202 14:56:44.243575 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-9vppx"] Dec 02 14:56:44 crc kubenswrapper[4814]: E1202 14:56:44.244244 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd623c5f-e23a-4914-a707-d4c67b3caccd" containerName="mariadb-account-create-update" Dec 02 14:56:44 crc kubenswrapper[4814]: I1202 14:56:44.244261 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd623c5f-e23a-4914-a707-d4c67b3caccd" containerName="mariadb-account-create-update" Dec 02 14:56:44 crc kubenswrapper[4814]: E1202 14:56:44.244281 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8792bd78-30e1-463d-bbb1-3c330c5500ce" containerName="mariadb-database-create" Dec 02 14:56:44 crc kubenswrapper[4814]: I1202 14:56:44.244290 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="8792bd78-30e1-463d-bbb1-3c330c5500ce" containerName="mariadb-database-create" Dec 02 14:56:44 crc kubenswrapper[4814]: I1202 14:56:44.244483 4814 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8792bd78-30e1-463d-bbb1-3c330c5500ce" containerName="mariadb-database-create" Dec 02 14:56:44 crc kubenswrapper[4814]: I1202 14:56:44.244498 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd623c5f-e23a-4914-a707-d4c67b3caccd" containerName="mariadb-account-create-update" Dec 02 14:56:44 crc kubenswrapper[4814]: I1202 14:56:44.245127 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" Dec 02 14:56:44 crc kubenswrapper[4814]: I1202 14:56:44.246904 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Dec 02 14:56:44 crc kubenswrapper[4814]: I1202 14:56:44.248719 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-jn48l" Dec 02 14:56:44 crc kubenswrapper[4814]: I1202 14:56:44.252928 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-9vppx"] Dec 02 14:56:44 crc kubenswrapper[4814]: I1202 14:56:44.328087 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-config-data\") pod \"watcher-kuttl-db-sync-9vppx\" (UID: \"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" Dec 02 14:56:44 crc kubenswrapper[4814]: I1202 14:56:44.328355 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-9vppx\" (UID: \"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" Dec 02 14:56:44 crc kubenswrapper[4814]: I1202 14:56:44.328446 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgnfs\" (UniqueName: \"kubernetes.io/projected/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-kube-api-access-pgnfs\") pod \"watcher-kuttl-db-sync-9vppx\" (UID: \"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" Dec 02 14:56:44 crc kubenswrapper[4814]: I1202 14:56:44.328529 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-db-sync-config-data\") pod \"watcher-kuttl-db-sync-9vppx\" (UID: \"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" Dec 02 14:56:44 crc kubenswrapper[4814]: I1202 14:56:44.429821 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-config-data\") pod \"watcher-kuttl-db-sync-9vppx\" (UID: \"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" Dec 02 14:56:44 crc kubenswrapper[4814]: I1202 14:56:44.430217 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-9vppx\" (UID: \"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" Dec 02 14:56:44 crc kubenswrapper[4814]: I1202 14:56:44.430340 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgnfs\" (UniqueName: \"kubernetes.io/projected/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-kube-api-access-pgnfs\") pod \"watcher-kuttl-db-sync-9vppx\" (UID: \"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" Dec 02 14:56:44 crc 
kubenswrapper[4814]: I1202 14:56:44.430513 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-db-sync-config-data\") pod \"watcher-kuttl-db-sync-9vppx\" (UID: \"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" Dec 02 14:56:44 crc kubenswrapper[4814]: I1202 14:56:44.442484 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-9vppx\" (UID: \"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" Dec 02 14:56:44 crc kubenswrapper[4814]: I1202 14:56:44.442630 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-db-sync-config-data\") pod \"watcher-kuttl-db-sync-9vppx\" (UID: \"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" Dec 02 14:56:44 crc kubenswrapper[4814]: I1202 14:56:44.442992 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-config-data\") pod \"watcher-kuttl-db-sync-9vppx\" (UID: \"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" Dec 02 14:56:44 crc kubenswrapper[4814]: I1202 14:56:44.446087 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgnfs\" (UniqueName: \"kubernetes.io/projected/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-kube-api-access-pgnfs\") pod \"watcher-kuttl-db-sync-9vppx\" (UID: \"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" Dec 02 14:56:44 crc 
kubenswrapper[4814]: I1202 14:56:44.595564 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" Dec 02 14:56:45 crc kubenswrapper[4814]: I1202 14:56:45.069979 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-9vppx"] Dec 02 14:56:45 crc kubenswrapper[4814]: I1202 14:56:45.208770 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" event={"ID":"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62","Type":"ContainerStarted","Data":"a917c9f9a5f85e33b15e1b3e15ac077a7c5f3d8b2b64c877dcc3b4d935ecf205"} Dec 02 14:56:46 crc kubenswrapper[4814]: I1202 14:56:46.228438 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" event={"ID":"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62","Type":"ContainerStarted","Data":"5967076ccf25083a4bd2fabab9342a998c5512ac75d353838dfda406344752f6"} Dec 02 14:56:46 crc kubenswrapper[4814]: I1202 14:56:46.250782 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" podStartSLOduration=2.250763308 podStartE2EDuration="2.250763308s" podCreationTimestamp="2025-12-02 14:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:56:46.245661631 +0000 UTC m=+1668.370023835" watchObservedRunningTime="2025-12-02 14:56:46.250763308 +0000 UTC m=+1668.375125512" Dec 02 14:56:47 crc kubenswrapper[4814]: I1202 14:56:47.144312 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 14:56:47 crc kubenswrapper[4814]: E1202 14:56:47.145165 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 14:56:48 crc kubenswrapper[4814]: I1202 14:56:48.255271 4814 generic.go:334] "Generic (PLEG): container finished" podID="a32376a2-39d6-4f3f-bf60-7d6ca2fcff62" containerID="5967076ccf25083a4bd2fabab9342a998c5512ac75d353838dfda406344752f6" exitCode=0 Dec 02 14:56:48 crc kubenswrapper[4814]: I1202 14:56:48.255333 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" event={"ID":"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62","Type":"ContainerDied","Data":"5967076ccf25083a4bd2fabab9342a998c5512ac75d353838dfda406344752f6"} Dec 02 14:56:49 crc kubenswrapper[4814]: I1202 14:56:49.586131 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" Dec 02 14:56:49 crc kubenswrapper[4814]: I1202 14:56:49.732280 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-combined-ca-bundle\") pod \"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62\" (UID: \"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62\") " Dec 02 14:56:49 crc kubenswrapper[4814]: I1202 14:56:49.732636 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgnfs\" (UniqueName: \"kubernetes.io/projected/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-kube-api-access-pgnfs\") pod \"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62\" (UID: \"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62\") " Dec 02 14:56:49 crc kubenswrapper[4814]: I1202 14:56:49.732714 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-db-sync-config-data\") pod \"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62\" (UID: \"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62\") " Dec 02 14:56:49 crc kubenswrapper[4814]: I1202 14:56:49.732832 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-config-data\") pod \"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62\" (UID: \"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62\") " Dec 02 14:56:49 crc kubenswrapper[4814]: I1202 14:56:49.740459 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-kube-api-access-pgnfs" (OuterVolumeSpecName: "kube-api-access-pgnfs") pod "a32376a2-39d6-4f3f-bf60-7d6ca2fcff62" (UID: "a32376a2-39d6-4f3f-bf60-7d6ca2fcff62"). InnerVolumeSpecName "kube-api-access-pgnfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:56:49 crc kubenswrapper[4814]: I1202 14:56:49.744959 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a32376a2-39d6-4f3f-bf60-7d6ca2fcff62" (UID: "a32376a2-39d6-4f3f-bf60-7d6ca2fcff62"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:56:49 crc kubenswrapper[4814]: I1202 14:56:49.761410 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a32376a2-39d6-4f3f-bf60-7d6ca2fcff62" (UID: "a32376a2-39d6-4f3f-bf60-7d6ca2fcff62"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:56:49 crc kubenswrapper[4814]: I1202 14:56:49.783083 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-config-data" (OuterVolumeSpecName: "config-data") pod "a32376a2-39d6-4f3f-bf60-7d6ca2fcff62" (UID: "a32376a2-39d6-4f3f-bf60-7d6ca2fcff62"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:56:49 crc kubenswrapper[4814]: I1202 14:56:49.834371 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:56:49 crc kubenswrapper[4814]: I1202 14:56:49.834406 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgnfs\" (UniqueName: \"kubernetes.io/projected/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-kube-api-access-pgnfs\") on node \"crc\" DevicePath \"\"" Dec 02 14:56:49 crc kubenswrapper[4814]: I1202 14:56:49.834418 4814 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:56:49 crc kubenswrapper[4814]: I1202 14:56:49.834427 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.272744 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" event={"ID":"a32376a2-39d6-4f3f-bf60-7d6ca2fcff62","Type":"ContainerDied","Data":"a917c9f9a5f85e33b15e1b3e15ac077a7c5f3d8b2b64c877dcc3b4d935ecf205"} Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.272781 4814 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="a917c9f9a5f85e33b15e1b3e15ac077a7c5f3d8b2b64c877dcc3b4d935ecf205" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.272799 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vppx" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.565540 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 14:56:50 crc kubenswrapper[4814]: E1202 14:56:50.565892 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a32376a2-39d6-4f3f-bf60-7d6ca2fcff62" containerName="watcher-kuttl-db-sync" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.565914 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a32376a2-39d6-4f3f-bf60-7d6ca2fcff62" containerName="watcher-kuttl-db-sync" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.566108 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a32376a2-39d6-4f3f-bf60-7d6ca2fcff62" containerName="watcher-kuttl-db-sync" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.566702 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.568853 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-jn48l" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.570214 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.579400 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.646955 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d0918f3-3021-40ee-8300-84a3c6916e3b-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"8d0918f3-3021-40ee-8300-84a3c6916e3b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.647020 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d0918f3-3021-40ee-8300-84a3c6916e3b-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"8d0918f3-3021-40ee-8300-84a3c6916e3b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.647052 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d0918f3-3021-40ee-8300-84a3c6916e3b-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"8d0918f3-3021-40ee-8300-84a3c6916e3b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.647099 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-bpz5l\" (UniqueName: \"kubernetes.io/projected/8d0918f3-3021-40ee-8300-84a3c6916e3b-kube-api-access-bpz5l\") pod \"watcher-kuttl-applier-0\" (UID: \"8d0918f3-3021-40ee-8300-84a3c6916e3b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.649454 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.650747 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.652371 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.656187 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.657455 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.666923 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.668149 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.669429 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.673942 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.682970 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.751335 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-logs\") pod \"watcher-kuttl-api-0\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.751374 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.751393 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnfk9\" (UniqueName: \"kubernetes.io/projected/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-kube-api-access-bnfk9\") pod \"watcher-kuttl-api-0\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.751421 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.751457 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f530f4-1f1c-4de6-8aef-0ac503f89387-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02f530f4-1f1c-4de6-8aef-0ac503f89387\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.751480 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.751498 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.751527 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d0918f3-3021-40ee-8300-84a3c6916e3b-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"8d0918f3-3021-40ee-8300-84a3c6916e3b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.751552 4814 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d0918f3-3021-40ee-8300-84a3c6916e3b-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"8d0918f3-3021-40ee-8300-84a3c6916e3b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.751568 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.751586 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d0918f3-3021-40ee-8300-84a3c6916e3b-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"8d0918f3-3021-40ee-8300-84a3c6916e3b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.751615 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpz5l\" (UniqueName: \"kubernetes.io/projected/8d0918f3-3021-40ee-8300-84a3c6916e3b-kube-api-access-bpz5l\") pod \"watcher-kuttl-applier-0\" (UID: \"8d0918f3-3021-40ee-8300-84a3c6916e3b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.751633 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ll8s\" (UniqueName: \"kubernetes.io/projected/02f530f4-1f1c-4de6-8aef-0ac503f89387-kube-api-access-7ll8s\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02f530f4-1f1c-4de6-8aef-0ac503f89387\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.751650 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f530f4-1f1c-4de6-8aef-0ac503f89387-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02f530f4-1f1c-4de6-8aef-0ac503f89387\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.751666 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/02f530f4-1f1c-4de6-8aef-0ac503f89387-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02f530f4-1f1c-4de6-8aef-0ac503f89387\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.751696 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f530f4-1f1c-4de6-8aef-0ac503f89387-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02f530f4-1f1c-4de6-8aef-0ac503f89387\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.754832 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d0918f3-3021-40ee-8300-84a3c6916e3b-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"8d0918f3-3021-40ee-8300-84a3c6916e3b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.771899 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d0918f3-3021-40ee-8300-84a3c6916e3b-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"8d0918f3-3021-40ee-8300-84a3c6916e3b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.774945 4814 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d0918f3-3021-40ee-8300-84a3c6916e3b-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"8d0918f3-3021-40ee-8300-84a3c6916e3b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.786206 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpz5l\" (UniqueName: \"kubernetes.io/projected/8d0918f3-3021-40ee-8300-84a3c6916e3b-kube-api-access-bpz5l\") pod \"watcher-kuttl-applier-0\" (UID: \"8d0918f3-3021-40ee-8300-84a3c6916e3b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.853069 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f530f4-1f1c-4de6-8aef-0ac503f89387-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02f530f4-1f1c-4de6-8aef-0ac503f89387\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.853126 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.853148 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.853182 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.853220 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ll8s\" (UniqueName: \"kubernetes.io/projected/02f530f4-1f1c-4de6-8aef-0ac503f89387-kube-api-access-7ll8s\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02f530f4-1f1c-4de6-8aef-0ac503f89387\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.853251 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f530f4-1f1c-4de6-8aef-0ac503f89387-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02f530f4-1f1c-4de6-8aef-0ac503f89387\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.853266 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/02f530f4-1f1c-4de6-8aef-0ac503f89387-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02f530f4-1f1c-4de6-8aef-0ac503f89387\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.853308 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f530f4-1f1c-4de6-8aef-0ac503f89387-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02f530f4-1f1c-4de6-8aef-0ac503f89387\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.853346 4814 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-logs\") pod \"watcher-kuttl-api-0\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.853360 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.853376 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnfk9\" (UniqueName: \"kubernetes.io/projected/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-kube-api-access-bnfk9\") pod \"watcher-kuttl-api-0\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.853403 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.854073 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-logs\") pod \"watcher-kuttl-api-0\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.854117 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f530f4-1f1c-4de6-8aef-0ac503f89387-logs\") 
pod \"watcher-kuttl-decision-engine-0\" (UID: \"02f530f4-1f1c-4de6-8aef-0ac503f89387\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.860854 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/02f530f4-1f1c-4de6-8aef-0ac503f89387-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02f530f4-1f1c-4de6-8aef-0ac503f89387\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.863840 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.864135 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.864257 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.864809 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-config-data\") pod \"watcher-kuttl-api-0\" (UID: 
\"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.865007 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f530f4-1f1c-4de6-8aef-0ac503f89387-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02f530f4-1f1c-4de6-8aef-0ac503f89387\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.866978 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.868793 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f530f4-1f1c-4de6-8aef-0ac503f89387-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02f530f4-1f1c-4de6-8aef-0ac503f89387\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.881551 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnfk9\" (UniqueName: \"kubernetes.io/projected/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-kube-api-access-bnfk9\") pod \"watcher-kuttl-api-0\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.886611 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.899668 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ll8s\" (UniqueName: \"kubernetes.io/projected/02f530f4-1f1c-4de6-8aef-0ac503f89387-kube-api-access-7ll8s\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02f530f4-1f1c-4de6-8aef-0ac503f89387\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.967726 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:50 crc kubenswrapper[4814]: I1202 14:56:50.984879 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:56:51 crc kubenswrapper[4814]: W1202 14:56:51.364393 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d0918f3_3021_40ee_8300_84a3c6916e3b.slice/crio-780f2732250ad144a02c658ebc658518c08298fc0a2930fac98c1878296b2ab6 WatchSource:0}: Error finding container 780f2732250ad144a02c658ebc658518c08298fc0a2930fac98c1878296b2ab6: Status 404 returned error can't find the container with id 780f2732250ad144a02c658ebc658518c08298fc0a2930fac98c1878296b2ab6 Dec 02 14:56:51 crc kubenswrapper[4814]: I1202 14:56:51.367248 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 14:56:51 crc kubenswrapper[4814]: I1202 14:56:51.426191 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 14:56:51 crc kubenswrapper[4814]: I1202 14:56:51.515415 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 14:56:52 crc kubenswrapper[4814]: I1202 14:56:52.294138 
4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1","Type":"ContainerStarted","Data":"7160936f52c3dea03466a12c83f00fd0b2c320be504dda159a5a1e7fc033486d"} Dec 02 14:56:52 crc kubenswrapper[4814]: I1202 14:56:52.294519 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1","Type":"ContainerStarted","Data":"0e71b8be9ffc3c072587d25e2828ce19c0737a6866f015a18f332fd496f5a05b"} Dec 02 14:56:52 crc kubenswrapper[4814]: I1202 14:56:52.294540 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:52 crc kubenswrapper[4814]: I1202 14:56:52.294552 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1","Type":"ContainerStarted","Data":"d6e5742bc9d18e0780c4e897ef205ad41d9bb56e63ff0eeaee318b9c93202a8a"} Dec 02 14:56:52 crc kubenswrapper[4814]: I1202 14:56:52.300627 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"02f530f4-1f1c-4de6-8aef-0ac503f89387","Type":"ContainerStarted","Data":"10e362a75567749b19a0615a3e3b7a3ad7ffe2793975b1199e0ddfc8b2ada1fb"} Dec 02 14:56:52 crc kubenswrapper[4814]: I1202 14:56:52.300683 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"02f530f4-1f1c-4de6-8aef-0ac503f89387","Type":"ContainerStarted","Data":"65723416a31c882153dc70876ccb9cab7a78d47125b4acab8a4cc8edb0958505"} Dec 02 14:56:52 crc kubenswrapper[4814]: I1202 14:56:52.305849 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"8d0918f3-3021-40ee-8300-84a3c6916e3b","Type":"ContainerStarted","Data":"6d0cae3bf09937eef4c8e9ee93d8efa8d0ee89dcafffdf6ee5e49e9036f93c77"} Dec 02 14:56:52 crc kubenswrapper[4814]: I1202 14:56:52.305907 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"8d0918f3-3021-40ee-8300-84a3c6916e3b","Type":"ContainerStarted","Data":"780f2732250ad144a02c658ebc658518c08298fc0a2930fac98c1878296b2ab6"} Dec 02 14:56:52 crc kubenswrapper[4814]: I1202 14:56:52.317895 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.317879219 podStartE2EDuration="2.317879219s" podCreationTimestamp="2025-12-02 14:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:56:52.31148742 +0000 UTC m=+1674.435849624" watchObservedRunningTime="2025-12-02 14:56:52.317879219 +0000 UTC m=+1674.442241413" Dec 02 14:56:52 crc kubenswrapper[4814]: I1202 14:56:52.342379 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.3423536 podStartE2EDuration="2.3423536s" podCreationTimestamp="2025-12-02 14:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:56:52.33473524 +0000 UTC m=+1674.459097454" watchObservedRunningTime="2025-12-02 14:56:52.3423536 +0000 UTC m=+1674.466715814" Dec 02 14:56:52 crc kubenswrapper[4814]: I1202 14:56:52.366690 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.366671927 podStartE2EDuration="2.366671927s" podCreationTimestamp="2025-12-02 14:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:56:52.363480248 +0000 UTC m=+1674.487842452" watchObservedRunningTime="2025-12-02 14:56:52.366671927 +0000 UTC m=+1674.491034141" Dec 02 14:56:54 crc kubenswrapper[4814]: I1202 14:56:54.602713 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:55 crc kubenswrapper[4814]: I1202 14:56:55.335287 4814 generic.go:334] "Generic (PLEG): container finished" podID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerID="10e362a75567749b19a0615a3e3b7a3ad7ffe2793975b1199e0ddfc8b2ada1fb" exitCode=1 Dec 02 14:56:55 crc kubenswrapper[4814]: I1202 14:56:55.335346 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"02f530f4-1f1c-4de6-8aef-0ac503f89387","Type":"ContainerDied","Data":"10e362a75567749b19a0615a3e3b7a3ad7ffe2793975b1199e0ddfc8b2ada1fb"} Dec 02 14:56:55 crc kubenswrapper[4814]: I1202 14:56:55.336081 4814 scope.go:117] "RemoveContainer" containerID="10e362a75567749b19a0615a3e3b7a3ad7ffe2793975b1199e0ddfc8b2ada1fb" Dec 02 14:56:55 crc kubenswrapper[4814]: I1202 14:56:55.887725 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:56:55 crc kubenswrapper[4814]: I1202 14:56:55.968474 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:56:56 crc kubenswrapper[4814]: I1202 14:56:56.344550 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"02f530f4-1f1c-4de6-8aef-0ac503f89387","Type":"ContainerStarted","Data":"c14a3cb27385905a6101276bde9b1a25d3798728e1e5d36b10e46db8dd7f4035"} Dec 02 14:56:58 crc kubenswrapper[4814]: I1202 14:56:58.365600 4814 generic.go:334] "Generic (PLEG): container finished" 
podID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerID="c14a3cb27385905a6101276bde9b1a25d3798728e1e5d36b10e46db8dd7f4035" exitCode=1 Dec 02 14:56:58 crc kubenswrapper[4814]: I1202 14:56:58.365654 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"02f530f4-1f1c-4de6-8aef-0ac503f89387","Type":"ContainerDied","Data":"c14a3cb27385905a6101276bde9b1a25d3798728e1e5d36b10e46db8dd7f4035"} Dec 02 14:56:58 crc kubenswrapper[4814]: I1202 14:56:58.365692 4814 scope.go:117] "RemoveContainer" containerID="10e362a75567749b19a0615a3e3b7a3ad7ffe2793975b1199e0ddfc8b2ada1fb" Dec 02 14:56:58 crc kubenswrapper[4814]: I1202 14:56:58.366399 4814 scope.go:117] "RemoveContainer" containerID="c14a3cb27385905a6101276bde9b1a25d3798728e1e5d36b10e46db8dd7f4035" Dec 02 14:56:58 crc kubenswrapper[4814]: E1202 14:56:58.366682 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 14:56:59 crc kubenswrapper[4814]: I1202 14:56:59.348517 4814 scope.go:117] "RemoveContainer" containerID="ebc070539c5c7c5efbb726b265158123e733f6c5e0dcbd5fc92a6affdf127e84" Dec 02 14:56:59 crc kubenswrapper[4814]: I1202 14:56:59.377959 4814 scope.go:117] "RemoveContainer" containerID="4fa84db1dbe6c47a044d4c1e1650949e09e074409f5b331888f1743687d5ef4a" Dec 02 14:56:59 crc kubenswrapper[4814]: I1202 14:56:59.417477 4814 scope.go:117] "RemoveContainer" containerID="b378beec2d4b2e1b10b2ab9decd4e32d0356f651fe2fcd57b44500b76a47a033" Dec 02 14:57:00 crc kubenswrapper[4814]: I1202 14:57:00.145629 4814 scope.go:117] "RemoveContainer" 
containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 14:57:00 crc kubenswrapper[4814]: E1202 14:57:00.145901 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 14:57:00 crc kubenswrapper[4814]: I1202 14:57:00.887565 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:57:00 crc kubenswrapper[4814]: I1202 14:57:00.935622 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:57:00 crc kubenswrapper[4814]: I1202 14:57:00.968703 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:57:00 crc kubenswrapper[4814]: I1202 14:57:00.978994 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:57:00 crc kubenswrapper[4814]: I1202 14:57:00.985288 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:57:00 crc kubenswrapper[4814]: I1202 14:57:00.985361 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:57:00 crc kubenswrapper[4814]: I1202 14:57:00.986309 4814 scope.go:117] "RemoveContainer" containerID="c14a3cb27385905a6101276bde9b1a25d3798728e1e5d36b10e46db8dd7f4035" Dec 02 14:57:00 crc kubenswrapper[4814]: E1202 14:57:00.986722 4814 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 14:57:01 crc kubenswrapper[4814]: I1202 14:57:01.421387 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 14:57:01 crc kubenswrapper[4814]: I1202 14:57:01.449821 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 14:57:11 crc kubenswrapper[4814]: I1202 14:57:11.145167 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 14:57:11 crc kubenswrapper[4814]: E1202 14:57:11.146035 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 14:57:13 crc kubenswrapper[4814]: I1202 14:57:13.144370 4814 scope.go:117] "RemoveContainer" containerID="c14a3cb27385905a6101276bde9b1a25d3798728e1e5d36b10e46db8dd7f4035" Dec 02 14:57:13 crc kubenswrapper[4814]: I1202 14:57:13.522876 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"02f530f4-1f1c-4de6-8aef-0ac503f89387","Type":"ContainerStarted","Data":"98fb33727c3dd3b8e32f8445150d32c4de48f11a6ef94ab4ef38544e51baf4b8"} Dec 02 14:57:16 crc kubenswrapper[4814]: I1202 
14:57:16.064444 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-29ba-account-create-update-7lxtx"] Dec 02 14:57:16 crc kubenswrapper[4814]: I1202 14:57:16.071478 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-db-create-xpsml"] Dec 02 14:57:16 crc kubenswrapper[4814]: I1202 14:57:16.079898 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-db-create-xpsml"] Dec 02 14:57:16 crc kubenswrapper[4814]: I1202 14:57:16.087246 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-29ba-account-create-update-7lxtx"] Dec 02 14:57:16 crc kubenswrapper[4814]: I1202 14:57:16.176857 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a60c924e-e444-400c-b269-3bdc3a931300" path="/var/lib/kubelet/pods/a60c924e-e444-400c-b269-3bdc3a931300/volumes" Dec 02 14:57:16 crc kubenswrapper[4814]: I1202 14:57:16.178402 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d24632ef-0280-4438-91ba-0a50b7495837" path="/var/lib/kubelet/pods/d24632ef-0280-4438-91ba-0a50b7495837/volumes" Dec 02 14:57:16 crc kubenswrapper[4814]: I1202 14:57:16.555297 4814 generic.go:334] "Generic (PLEG): container finished" podID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerID="98fb33727c3dd3b8e32f8445150d32c4de48f11a6ef94ab4ef38544e51baf4b8" exitCode=1 Dec 02 14:57:16 crc kubenswrapper[4814]: I1202 14:57:16.555341 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"02f530f4-1f1c-4de6-8aef-0ac503f89387","Type":"ContainerDied","Data":"98fb33727c3dd3b8e32f8445150d32c4de48f11a6ef94ab4ef38544e51baf4b8"} Dec 02 14:57:16 crc kubenswrapper[4814]: I1202 14:57:16.555424 4814 scope.go:117] "RemoveContainer" containerID="c14a3cb27385905a6101276bde9b1a25d3798728e1e5d36b10e46db8dd7f4035" Dec 02 14:57:16 crc kubenswrapper[4814]: I1202 14:57:16.556059 
4814 scope.go:117] "RemoveContainer" containerID="98fb33727c3dd3b8e32f8445150d32c4de48f11a6ef94ab4ef38544e51baf4b8" Dec 02 14:57:16 crc kubenswrapper[4814]: E1202 14:57:16.556410 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 14:57:20 crc kubenswrapper[4814]: I1202 14:57:20.985714 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:57:20 crc kubenswrapper[4814]: I1202 14:57:20.986377 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:57:20 crc kubenswrapper[4814]: I1202 14:57:20.986402 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:57:20 crc kubenswrapper[4814]: I1202 14:57:20.986426 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:57:20 crc kubenswrapper[4814]: I1202 14:57:20.987328 4814 scope.go:117] "RemoveContainer" containerID="98fb33727c3dd3b8e32f8445150d32c4de48f11a6ef94ab4ef38544e51baf4b8" Dec 02 14:57:20 crc kubenswrapper[4814]: E1202 14:57:20.987823 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 14:57:21 crc kubenswrapper[4814]: I1202 14:57:21.605192 4814 scope.go:117] "RemoveContainer" containerID="98fb33727c3dd3b8e32f8445150d32c4de48f11a6ef94ab4ef38544e51baf4b8" Dec 02 14:57:21 crc kubenswrapper[4814]: E1202 14:57:21.605850 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 14:57:26 crc kubenswrapper[4814]: I1202 14:57:26.144756 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 14:57:26 crc kubenswrapper[4814]: E1202 14:57:26.145578 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 14:57:36 crc kubenswrapper[4814]: I1202 14:57:36.144665 4814 scope.go:117] "RemoveContainer" containerID="98fb33727c3dd3b8e32f8445150d32c4de48f11a6ef94ab4ef38544e51baf4b8" Dec 02 14:57:36 crc kubenswrapper[4814]: I1202 14:57:36.746663 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"02f530f4-1f1c-4de6-8aef-0ac503f89387","Type":"ContainerStarted","Data":"d699cb7937404e6ff179f7ee1d3383ff9fb9edf4ed4afcb231e737629ded3aa9"} Dec 02 14:57:39 crc kubenswrapper[4814]: I1202 14:57:39.774013 4814 generic.go:334] "Generic (PLEG): container 
finished" podID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerID="d699cb7937404e6ff179f7ee1d3383ff9fb9edf4ed4afcb231e737629ded3aa9" exitCode=1 Dec 02 14:57:39 crc kubenswrapper[4814]: I1202 14:57:39.774083 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"02f530f4-1f1c-4de6-8aef-0ac503f89387","Type":"ContainerDied","Data":"d699cb7937404e6ff179f7ee1d3383ff9fb9edf4ed4afcb231e737629ded3aa9"} Dec 02 14:57:39 crc kubenswrapper[4814]: I1202 14:57:39.774325 4814 scope.go:117] "RemoveContainer" containerID="98fb33727c3dd3b8e32f8445150d32c4de48f11a6ef94ab4ef38544e51baf4b8" Dec 02 14:57:39 crc kubenswrapper[4814]: I1202 14:57:39.775130 4814 scope.go:117] "RemoveContainer" containerID="d699cb7937404e6ff179f7ee1d3383ff9fb9edf4ed4afcb231e737629ded3aa9" Dec 02 14:57:39 crc kubenswrapper[4814]: E1202 14:57:39.775722 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 14:57:40 crc kubenswrapper[4814]: I1202 14:57:40.985580 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:57:40 crc kubenswrapper[4814]: I1202 14:57:40.985871 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:57:40 crc kubenswrapper[4814]: I1202 14:57:40.986401 4814 scope.go:117] "RemoveContainer" containerID="d699cb7937404e6ff179f7ee1d3383ff9fb9edf4ed4afcb231e737629ded3aa9" Dec 02 14:57:40 crc kubenswrapper[4814]: E1202 14:57:40.986671 4814 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 14:57:41 crc kubenswrapper[4814]: I1202 14:57:41.145584 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 14:57:41 crc kubenswrapper[4814]: E1202 14:57:41.146039 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 14:57:50 crc kubenswrapper[4814]: I1202 14:57:50.985137 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:57:50 crc kubenswrapper[4814]: I1202 14:57:50.985599 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:57:50 crc kubenswrapper[4814]: I1202 14:57:50.986263 4814 scope.go:117] "RemoveContainer" containerID="d699cb7937404e6ff179f7ee1d3383ff9fb9edf4ed4afcb231e737629ded3aa9" Dec 02 14:57:50 crc kubenswrapper[4814]: E1202 14:57:50.986496 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 14:57:54 crc kubenswrapper[4814]: I1202 14:57:54.145074 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 14:57:54 crc kubenswrapper[4814]: E1202 14:57:54.145874 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 14:57:57 crc kubenswrapper[4814]: I1202 14:57:57.046087 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-76pvq"] Dec 02 14:57:57 crc kubenswrapper[4814]: I1202 14:57:57.056792 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-76pvq"] Dec 02 14:57:58 crc kubenswrapper[4814]: I1202 14:57:58.161851 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c600570c-ddbe-48ab-84d2-7acb9d546f01" path="/var/lib/kubelet/pods/c600570c-ddbe-48ab-84d2-7acb9d546f01/volumes" Dec 02 14:57:59 crc kubenswrapper[4814]: I1202 14:57:59.504881 4814 scope.go:117] "RemoveContainer" containerID="92ecfa864356e251abb405231e6ba7adc0b8d9398318e99e2ec2ee96622585f2" Dec 02 14:57:59 crc kubenswrapper[4814]: I1202 14:57:59.536449 4814 scope.go:117] "RemoveContainer" containerID="cf2469296607bb13cdf4b81c85ef987173b274757267032d6d5b7103f350a034" Dec 02 14:57:59 crc kubenswrapper[4814]: I1202 14:57:59.599082 4814 scope.go:117] "RemoveContainer" containerID="4e896c31d4830d795391b60001c67a4254747c2cd5c23b757d86e95e5fabb70e" Dec 02 14:57:59 crc kubenswrapper[4814]: I1202 14:57:59.642060 4814 scope.go:117] 
"RemoveContainer" containerID="82e01a2287fc71855bf41ba1dd4aca09709345c5a0e2ed867bc2cfa9b9f7db38" Dec 02 14:57:59 crc kubenswrapper[4814]: I1202 14:57:59.672007 4814 scope.go:117] "RemoveContainer" containerID="7d01f2f77838a309ed7b99de6c14e4248ac2f46ba15901a908cb94e7a925c00c" Dec 02 14:57:59 crc kubenswrapper[4814]: I1202 14:57:59.694997 4814 scope.go:117] "RemoveContainer" containerID="c9856a59876bf8a7941f35988ebea958a5093876e0e732887365d9a167652c22" Dec 02 14:57:59 crc kubenswrapper[4814]: I1202 14:57:59.728458 4814 scope.go:117] "RemoveContainer" containerID="a0efff47d8a70af96abcd4cf592b2366c05f1e66bb3bfc9cb751652dcdf6c084" Dec 02 14:57:59 crc kubenswrapper[4814]: I1202 14:57:59.756464 4814 scope.go:117] "RemoveContainer" containerID="9912f23dabf512cbdf27b5ca6bb44dfe6347d9cfa418606e46674ece9bd65489" Dec 02 14:58:05 crc kubenswrapper[4814]: I1202 14:58:05.145166 4814 scope.go:117] "RemoveContainer" containerID="d699cb7937404e6ff179f7ee1d3383ff9fb9edf4ed4afcb231e737629ded3aa9" Dec 02 14:58:05 crc kubenswrapper[4814]: E1202 14:58:05.145968 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 14:58:06 crc kubenswrapper[4814]: I1202 14:58:06.145144 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 14:58:06 crc kubenswrapper[4814]: E1202 14:58:06.145462 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 14:58:17 crc kubenswrapper[4814]: I1202 14:58:17.145304 4814 scope.go:117] "RemoveContainer" containerID="d699cb7937404e6ff179f7ee1d3383ff9fb9edf4ed4afcb231e737629ded3aa9" Dec 02 14:58:17 crc kubenswrapper[4814]: I1202 14:58:17.145597 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 14:58:17 crc kubenswrapper[4814]: E1202 14:58:17.145819 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 14:58:17 crc kubenswrapper[4814]: E1202 14:58:17.145880 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 14:58:18 crc kubenswrapper[4814]: I1202 14:58:18.057350 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-xfbr9"] Dec 02 14:58:18 crc kubenswrapper[4814]: I1202 14:58:18.068337 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-xfbr9"] Dec 02 14:58:18 crc kubenswrapper[4814]: I1202 14:58:18.156650 4814 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="eb0c692e-42a7-4e89-b387-8215fd02a76f" path="/var/lib/kubelet/pods/eb0c692e-42a7-4e89-b387-8215fd02a76f/volumes" Dec 02 14:58:28 crc kubenswrapper[4814]: I1202 14:58:28.151273 4814 scope.go:117] "RemoveContainer" containerID="d699cb7937404e6ff179f7ee1d3383ff9fb9edf4ed4afcb231e737629ded3aa9" Dec 02 14:58:29 crc kubenswrapper[4814]: I1202 14:58:29.270983 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"02f530f4-1f1c-4de6-8aef-0ac503f89387","Type":"ContainerStarted","Data":"2eaf23b915eb13f1ed0f8ffedebe432f382ba1a2f02a6d9622e1409222ed415d"} Dec 02 14:58:30 crc kubenswrapper[4814]: I1202 14:58:30.986102 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:58:30 crc kubenswrapper[4814]: E1202 14:58:30.986791 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2eaf23b915eb13f1ed0f8ffedebe432f382ba1a2f02a6d9622e1409222ed415d is running failed: container process not found" containerID="2eaf23b915eb13f1ed0f8ffedebe432f382ba1a2f02a6d9622e1409222ed415d" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Dec 02 14:58:30 crc kubenswrapper[4814]: E1202 14:58:30.987400 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2eaf23b915eb13f1ed0f8ffedebe432f382ba1a2f02a6d9622e1409222ed415d is running failed: container process not found" containerID="2eaf23b915eb13f1ed0f8ffedebe432f382ba1a2f02a6d9622e1409222ed415d" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Dec 02 14:58:30 crc kubenswrapper[4814]: E1202 14:58:30.987918 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or 
running: checking if PID of 2eaf23b915eb13f1ed0f8ffedebe432f382ba1a2f02a6d9622e1409222ed415d is running failed: container process not found" containerID="2eaf23b915eb13f1ed0f8ffedebe432f382ba1a2f02a6d9622e1409222ed415d" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Dec 02 14:58:30 crc kubenswrapper[4814]: E1202 14:58:30.987960 4814 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2eaf23b915eb13f1ed0f8ffedebe432f382ba1a2f02a6d9622e1409222ed415d is running failed: container process not found" probeType="Startup" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerName="watcher-decision-engine" Dec 02 14:58:31 crc kubenswrapper[4814]: I1202 14:58:31.291607 4814 generic.go:334] "Generic (PLEG): container finished" podID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerID="2eaf23b915eb13f1ed0f8ffedebe432f382ba1a2f02a6d9622e1409222ed415d" exitCode=1 Dec 02 14:58:31 crc kubenswrapper[4814]: I1202 14:58:31.291762 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"02f530f4-1f1c-4de6-8aef-0ac503f89387","Type":"ContainerDied","Data":"2eaf23b915eb13f1ed0f8ffedebe432f382ba1a2f02a6d9622e1409222ed415d"} Dec 02 14:58:31 crc kubenswrapper[4814]: I1202 14:58:31.291984 4814 scope.go:117] "RemoveContainer" containerID="d699cb7937404e6ff179f7ee1d3383ff9fb9edf4ed4afcb231e737629ded3aa9" Dec 02 14:58:31 crc kubenswrapper[4814]: I1202 14:58:31.292749 4814 scope.go:117] "RemoveContainer" containerID="2eaf23b915eb13f1ed0f8ffedebe432f382ba1a2f02a6d9622e1409222ed415d" Dec 02 14:58:31 crc kubenswrapper[4814]: E1202 14:58:31.293154 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine 
pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 14:58:32 crc kubenswrapper[4814]: I1202 14:58:32.145282 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 14:58:32 crc kubenswrapper[4814]: E1202 14:58:32.145534 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 14:58:40 crc kubenswrapper[4814]: I1202 14:58:40.985730 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:58:40 crc kubenswrapper[4814]: I1202 14:58:40.986986 4814 scope.go:117] "RemoveContainer" containerID="2eaf23b915eb13f1ed0f8ffedebe432f382ba1a2f02a6d9622e1409222ed415d" Dec 02 14:58:40 crc kubenswrapper[4814]: E1202 14:58:40.987429 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 14:58:44 crc kubenswrapper[4814]: I1202 14:58:44.144941 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 14:58:44 crc kubenswrapper[4814]: E1202 14:58:44.145488 4814 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 14:58:50 crc kubenswrapper[4814]: I1202 14:58:50.985377 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:58:50 crc kubenswrapper[4814]: I1202 14:58:50.985871 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 14:58:50 crc kubenswrapper[4814]: I1202 14:58:50.986590 4814 scope.go:117] "RemoveContainer" containerID="2eaf23b915eb13f1ed0f8ffedebe432f382ba1a2f02a6d9622e1409222ed415d" Dec 02 14:58:50 crc kubenswrapper[4814]: E1202 14:58:50.986825 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 14:58:56 crc kubenswrapper[4814]: I1202 14:58:56.144912 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 14:58:56 crc kubenswrapper[4814]: E1202 14:58:56.146152 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 14:58:59 crc kubenswrapper[4814]: I1202 14:58:59.922001 4814 scope.go:117] "RemoveContainer" containerID="79dcc93f3eaf042dc51fe4dd3dbd92cdd4b4a0eb8505cabb36243f675f36954b" Dec 02 14:59:03 crc kubenswrapper[4814]: I1202 14:59:03.144522 4814 scope.go:117] "RemoveContainer" containerID="2eaf23b915eb13f1ed0f8ffedebe432f382ba1a2f02a6d9622e1409222ed415d" Dec 02 14:59:03 crc kubenswrapper[4814]: E1202 14:59:03.145382 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 14:59:11 crc kubenswrapper[4814]: I1202 14:59:11.145406 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 14:59:11 crc kubenswrapper[4814]: E1202 14:59:11.146670 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 14:59:14 crc kubenswrapper[4814]: I1202 14:59:14.145060 4814 scope.go:117] "RemoveContainer" containerID="2eaf23b915eb13f1ed0f8ffedebe432f382ba1a2f02a6d9622e1409222ed415d" Dec 02 14:59:14 crc kubenswrapper[4814]: E1202 14:59:14.145683 4814 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 14:59:25 crc kubenswrapper[4814]: I1202 14:59:25.144622 4814 scope.go:117] "RemoveContainer" containerID="2eaf23b915eb13f1ed0f8ffedebe432f382ba1a2f02a6d9622e1409222ed415d" Dec 02 14:59:25 crc kubenswrapper[4814]: E1202 14:59:25.145318 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 14:59:26 crc kubenswrapper[4814]: I1202 14:59:26.145735 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 14:59:26 crc kubenswrapper[4814]: E1202 14:59:26.146091 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 14:59:28 crc kubenswrapper[4814]: E1202 14:59:28.211014 4814 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.13:57612->38.102.83.13:39561: write tcp 38.102.83.13:57612->38.102.83.13:39561: write: broken 
pipe Dec 02 14:59:37 crc kubenswrapper[4814]: I1202 14:59:37.145332 4814 scope.go:117] "RemoveContainer" containerID="2eaf23b915eb13f1ed0f8ffedebe432f382ba1a2f02a6d9622e1409222ed415d" Dec 02 14:59:37 crc kubenswrapper[4814]: E1202 14:59:37.146564 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 14:59:38 crc kubenswrapper[4814]: I1202 14:59:38.148942 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 14:59:38 crc kubenswrapper[4814]: E1202 14:59:38.149248 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 14:59:47 crc kubenswrapper[4814]: E1202 14:59:47.935734 4814 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.13:57598->38.102.83.13:39561: write tcp 38.102.83.13:57598->38.102.83.13:39561: write: broken pipe Dec 02 14:59:49 crc kubenswrapper[4814]: I1202 14:59:49.145867 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 14:59:49 crc kubenswrapper[4814]: E1202 14:59:49.146205 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 14:59:52 crc kubenswrapper[4814]: I1202 14:59:52.145245 4814 scope.go:117] "RemoveContainer" containerID="2eaf23b915eb13f1ed0f8ffedebe432f382ba1a2f02a6d9622e1409222ed415d" Dec 02 14:59:53 crc kubenswrapper[4814]: I1202 14:59:53.074846 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"02f530f4-1f1c-4de6-8aef-0ac503f89387","Type":"ContainerStarted","Data":"714b51e6383db7286d3aa6db07d911a84337db800a8889cf4fb860a435a19d22"} Dec 02 14:59:56 crc kubenswrapper[4814]: I1202 14:59:56.099107 4814 generic.go:334] "Generic (PLEG): container finished" podID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerID="714b51e6383db7286d3aa6db07d911a84337db800a8889cf4fb860a435a19d22" exitCode=1 Dec 02 14:59:56 crc kubenswrapper[4814]: I1202 14:59:56.099173 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"02f530f4-1f1c-4de6-8aef-0ac503f89387","Type":"ContainerDied","Data":"714b51e6383db7286d3aa6db07d911a84337db800a8889cf4fb860a435a19d22"} Dec 02 14:59:56 crc kubenswrapper[4814]: I1202 14:59:56.099563 4814 scope.go:117] "RemoveContainer" containerID="2eaf23b915eb13f1ed0f8ffedebe432f382ba1a2f02a6d9622e1409222ed415d" Dec 02 14:59:56 crc kubenswrapper[4814]: I1202 14:59:56.100486 4814 scope.go:117] "RemoveContainer" containerID="714b51e6383db7286d3aa6db07d911a84337db800a8889cf4fb860a435a19d22" Dec 02 14:59:56 crc kubenswrapper[4814]: E1202 14:59:56.100937 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine 
pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 15:00:00 crc kubenswrapper[4814]: I1202 15:00:00.161611 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb"] Dec 02 15:00:00 crc kubenswrapper[4814]: I1202 15:00:00.164646 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb" Dec 02 15:00:00 crc kubenswrapper[4814]: I1202 15:00:00.167371 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 15:00:00 crc kubenswrapper[4814]: I1202 15:00:00.168420 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 15:00:00 crc kubenswrapper[4814]: I1202 15:00:00.172591 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb"] Dec 02 15:00:00 crc kubenswrapper[4814]: I1202 15:00:00.362170 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf2f9\" (UniqueName: \"kubernetes.io/projected/a0d7152e-41bb-4805-88e2-5e9fa576462a-kube-api-access-sf2f9\") pod \"collect-profiles-29411460-bbdlb\" (UID: \"a0d7152e-41bb-4805-88e2-5e9fa576462a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb" Dec 02 15:00:00 crc kubenswrapper[4814]: I1202 15:00:00.362253 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0d7152e-41bb-4805-88e2-5e9fa576462a-config-volume\") pod \"collect-profiles-29411460-bbdlb\" (UID: 
\"a0d7152e-41bb-4805-88e2-5e9fa576462a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb" Dec 02 15:00:00 crc kubenswrapper[4814]: I1202 15:00:00.362384 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0d7152e-41bb-4805-88e2-5e9fa576462a-secret-volume\") pod \"collect-profiles-29411460-bbdlb\" (UID: \"a0d7152e-41bb-4805-88e2-5e9fa576462a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb" Dec 02 15:00:00 crc kubenswrapper[4814]: I1202 15:00:00.463771 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf2f9\" (UniqueName: \"kubernetes.io/projected/a0d7152e-41bb-4805-88e2-5e9fa576462a-kube-api-access-sf2f9\") pod \"collect-profiles-29411460-bbdlb\" (UID: \"a0d7152e-41bb-4805-88e2-5e9fa576462a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb" Dec 02 15:00:00 crc kubenswrapper[4814]: I1202 15:00:00.463838 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0d7152e-41bb-4805-88e2-5e9fa576462a-config-volume\") pod \"collect-profiles-29411460-bbdlb\" (UID: \"a0d7152e-41bb-4805-88e2-5e9fa576462a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb" Dec 02 15:00:00 crc kubenswrapper[4814]: I1202 15:00:00.463915 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0d7152e-41bb-4805-88e2-5e9fa576462a-secret-volume\") pod \"collect-profiles-29411460-bbdlb\" (UID: \"a0d7152e-41bb-4805-88e2-5e9fa576462a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb" Dec 02 15:00:00 crc kubenswrapper[4814]: I1202 15:00:00.465883 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/a0d7152e-41bb-4805-88e2-5e9fa576462a-config-volume\") pod \"collect-profiles-29411460-bbdlb\" (UID: \"a0d7152e-41bb-4805-88e2-5e9fa576462a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb" Dec 02 15:00:00 crc kubenswrapper[4814]: I1202 15:00:00.476518 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0d7152e-41bb-4805-88e2-5e9fa576462a-secret-volume\") pod \"collect-profiles-29411460-bbdlb\" (UID: \"a0d7152e-41bb-4805-88e2-5e9fa576462a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb" Dec 02 15:00:00 crc kubenswrapper[4814]: I1202 15:00:00.490429 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf2f9\" (UniqueName: \"kubernetes.io/projected/a0d7152e-41bb-4805-88e2-5e9fa576462a-kube-api-access-sf2f9\") pod \"collect-profiles-29411460-bbdlb\" (UID: \"a0d7152e-41bb-4805-88e2-5e9fa576462a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb" Dec 02 15:00:00 crc kubenswrapper[4814]: I1202 15:00:00.498488 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb" Dec 02 15:00:00 crc kubenswrapper[4814]: I1202 15:00:00.822350 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb"] Dec 02 15:00:00 crc kubenswrapper[4814]: I1202 15:00:00.985853 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:00:00 crc kubenswrapper[4814]: I1202 15:00:00.985891 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:00:00 crc kubenswrapper[4814]: I1202 15:00:00.986525 4814 scope.go:117] "RemoveContainer" containerID="714b51e6383db7286d3aa6db07d911a84337db800a8889cf4fb860a435a19d22" Dec 02 15:00:00 crc kubenswrapper[4814]: E1202 15:00:00.986727 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 15:00:01 crc kubenswrapper[4814]: I1202 15:00:01.147411 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb" event={"ID":"a0d7152e-41bb-4805-88e2-5e9fa576462a","Type":"ContainerStarted","Data":"0b77aed9ae1ddc5638fc2ece32e838152fe5edb8a39082cbf78d9ac683924c58"} Dec 02 15:00:01 crc kubenswrapper[4814]: I1202 15:00:01.147449 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb" 
event={"ID":"a0d7152e-41bb-4805-88e2-5e9fa576462a","Type":"ContainerStarted","Data":"8aec1aacd19b6a7eea28b95ae98d0def3151b1c2c9d99c19a1f4ab7b042e9294"} Dec 02 15:00:01 crc kubenswrapper[4814]: I1202 15:00:01.165838 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb" podStartSLOduration=1.165817193 podStartE2EDuration="1.165817193s" podCreationTimestamp="2025-12-02 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:00:01.159125177 +0000 UTC m=+1863.283487381" watchObservedRunningTime="2025-12-02 15:00:01.165817193 +0000 UTC m=+1863.290179407" Dec 02 15:00:02 crc kubenswrapper[4814]: I1202 15:00:02.163463 4814 generic.go:334] "Generic (PLEG): container finished" podID="a0d7152e-41bb-4805-88e2-5e9fa576462a" containerID="0b77aed9ae1ddc5638fc2ece32e838152fe5edb8a39082cbf78d9ac683924c58" exitCode=0 Dec 02 15:00:02 crc kubenswrapper[4814]: I1202 15:00:02.164675 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb" event={"ID":"a0d7152e-41bb-4805-88e2-5e9fa576462a","Type":"ContainerDied","Data":"0b77aed9ae1ddc5638fc2ece32e838152fe5edb8a39082cbf78d9ac683924c58"} Dec 02 15:00:03 crc kubenswrapper[4814]: I1202 15:00:03.567106 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb" Dec 02 15:00:03 crc kubenswrapper[4814]: I1202 15:00:03.718819 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0d7152e-41bb-4805-88e2-5e9fa576462a-config-volume\") pod \"a0d7152e-41bb-4805-88e2-5e9fa576462a\" (UID: \"a0d7152e-41bb-4805-88e2-5e9fa576462a\") " Dec 02 15:00:03 crc kubenswrapper[4814]: I1202 15:00:03.719140 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0d7152e-41bb-4805-88e2-5e9fa576462a-secret-volume\") pod \"a0d7152e-41bb-4805-88e2-5e9fa576462a\" (UID: \"a0d7152e-41bb-4805-88e2-5e9fa576462a\") " Dec 02 15:00:03 crc kubenswrapper[4814]: I1202 15:00:03.719262 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf2f9\" (UniqueName: \"kubernetes.io/projected/a0d7152e-41bb-4805-88e2-5e9fa576462a-kube-api-access-sf2f9\") pod \"a0d7152e-41bb-4805-88e2-5e9fa576462a\" (UID: \"a0d7152e-41bb-4805-88e2-5e9fa576462a\") " Dec 02 15:00:03 crc kubenswrapper[4814]: I1202 15:00:03.719568 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0d7152e-41bb-4805-88e2-5e9fa576462a-config-volume" (OuterVolumeSpecName: "config-volume") pod "a0d7152e-41bb-4805-88e2-5e9fa576462a" (UID: "a0d7152e-41bb-4805-88e2-5e9fa576462a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:00:03 crc kubenswrapper[4814]: I1202 15:00:03.724858 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0d7152e-41bb-4805-88e2-5e9fa576462a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a0d7152e-41bb-4805-88e2-5e9fa576462a" (UID: "a0d7152e-41bb-4805-88e2-5e9fa576462a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:00:03 crc kubenswrapper[4814]: I1202 15:00:03.724995 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0d7152e-41bb-4805-88e2-5e9fa576462a-kube-api-access-sf2f9" (OuterVolumeSpecName: "kube-api-access-sf2f9") pod "a0d7152e-41bb-4805-88e2-5e9fa576462a" (UID: "a0d7152e-41bb-4805-88e2-5e9fa576462a"). InnerVolumeSpecName "kube-api-access-sf2f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:00:03 crc kubenswrapper[4814]: I1202 15:00:03.821320 4814 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0d7152e-41bb-4805-88e2-5e9fa576462a-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 15:00:03 crc kubenswrapper[4814]: I1202 15:00:03.821350 4814 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0d7152e-41bb-4805-88e2-5e9fa576462a-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 15:00:03 crc kubenswrapper[4814]: I1202 15:00:03.821359 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf2f9\" (UniqueName: \"kubernetes.io/projected/a0d7152e-41bb-4805-88e2-5e9fa576462a-kube-api-access-sf2f9\") on node \"crc\" DevicePath \"\"" Dec 02 15:00:04 crc kubenswrapper[4814]: I1202 15:00:04.145329 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 15:00:04 crc kubenswrapper[4814]: E1202 15:00:04.145815 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" 
Dec 02 15:00:04 crc kubenswrapper[4814]: I1202 15:00:04.187680 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb" event={"ID":"a0d7152e-41bb-4805-88e2-5e9fa576462a","Type":"ContainerDied","Data":"8aec1aacd19b6a7eea28b95ae98d0def3151b1c2c9d99c19a1f4ab7b042e9294"} Dec 02 15:00:04 crc kubenswrapper[4814]: I1202 15:00:04.187733 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8aec1aacd19b6a7eea28b95ae98d0def3151b1c2c9d99c19a1f4ab7b042e9294" Dec 02 15:00:04 crc kubenswrapper[4814]: I1202 15:00:04.187764 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-bbdlb" Dec 02 15:00:13 crc kubenswrapper[4814]: I1202 15:00:13.145502 4814 scope.go:117] "RemoveContainer" containerID="714b51e6383db7286d3aa6db07d911a84337db800a8889cf4fb860a435a19d22" Dec 02 15:00:13 crc kubenswrapper[4814]: E1202 15:00:13.146533 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 15:00:19 crc kubenswrapper[4814]: I1202 15:00:19.145064 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 15:00:19 crc kubenswrapper[4814]: E1202 15:00:19.146274 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:00:20 crc kubenswrapper[4814]: I1202 15:00:20.985653 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:00:20 crc kubenswrapper[4814]: I1202 15:00:20.985972 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:00:20 crc kubenswrapper[4814]: I1202 15:00:20.986548 4814 scope.go:117] "RemoveContainer" containerID="714b51e6383db7286d3aa6db07d911a84337db800a8889cf4fb860a435a19d22" Dec 02 15:00:20 crc kubenswrapper[4814]: E1202 15:00:20.986741 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 15:00:32 crc kubenswrapper[4814]: I1202 15:00:32.145768 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 15:00:32 crc kubenswrapper[4814]: E1202 15:00:32.146657 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:00:36 crc kubenswrapper[4814]: I1202 15:00:36.144762 4814 scope.go:117] "RemoveContainer" 
containerID="714b51e6383db7286d3aa6db07d911a84337db800a8889cf4fb860a435a19d22" Dec 02 15:00:36 crc kubenswrapper[4814]: E1202 15:00:36.145591 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 15:00:47 crc kubenswrapper[4814]: I1202 15:00:47.144881 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 15:00:47 crc kubenswrapper[4814]: E1202 15:00:47.145815 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:00:49 crc kubenswrapper[4814]: I1202 15:00:49.145331 4814 scope.go:117] "RemoveContainer" containerID="714b51e6383db7286d3aa6db07d911a84337db800a8889cf4fb860a435a19d22" Dec 02 15:00:49 crc kubenswrapper[4814]: E1202 15:00:49.146208 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 15:00:58 crc kubenswrapper[4814]: I1202 15:00:58.152496 4814 scope.go:117] "RemoveContainer" 
containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 15:00:58 crc kubenswrapper[4814]: I1202 15:00:58.736348 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerStarted","Data":"908761a95c0f9f6447a8c26806b20d7573f99def1bcbce124d6957c579039255"} Dec 02 15:01:00 crc kubenswrapper[4814]: I1202 15:01:00.157649 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-cron-29411461-fpt5d"] Dec 02 15:01:00 crc kubenswrapper[4814]: E1202 15:01:00.158580 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0d7152e-41bb-4805-88e2-5e9fa576462a" containerName="collect-profiles" Dec 02 15:01:00 crc kubenswrapper[4814]: I1202 15:01:00.158596 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0d7152e-41bb-4805-88e2-5e9fa576462a" containerName="collect-profiles" Dec 02 15:01:00 crc kubenswrapper[4814]: I1202 15:01:00.158791 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0d7152e-41bb-4805-88e2-5e9fa576462a" containerName="collect-profiles" Dec 02 15:01:00 crc kubenswrapper[4814]: I1202 15:01:00.160840 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" Dec 02 15:01:00 crc kubenswrapper[4814]: I1202 15:01:00.177820 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-cron-29411461-fpt5d"] Dec 02 15:01:00 crc kubenswrapper[4814]: I1202 15:01:00.274814 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526fa3fe-0f34-49ac-b9d6-f580b208dae2-combined-ca-bundle\") pod \"keystone-cron-29411461-fpt5d\" (UID: \"526fa3fe-0f34-49ac-b9d6-f580b208dae2\") " pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" Dec 02 15:01:00 crc kubenswrapper[4814]: I1202 15:01:00.274938 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mldjh\" (UniqueName: \"kubernetes.io/projected/526fa3fe-0f34-49ac-b9d6-f580b208dae2-kube-api-access-mldjh\") pod \"keystone-cron-29411461-fpt5d\" (UID: \"526fa3fe-0f34-49ac-b9d6-f580b208dae2\") " pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" Dec 02 15:01:00 crc kubenswrapper[4814]: I1202 15:01:00.275016 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/526fa3fe-0f34-49ac-b9d6-f580b208dae2-fernet-keys\") pod \"keystone-cron-29411461-fpt5d\" (UID: \"526fa3fe-0f34-49ac-b9d6-f580b208dae2\") " pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" Dec 02 15:01:00 crc kubenswrapper[4814]: I1202 15:01:00.275182 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/526fa3fe-0f34-49ac-b9d6-f580b208dae2-config-data\") pod \"keystone-cron-29411461-fpt5d\" (UID: \"526fa3fe-0f34-49ac-b9d6-f580b208dae2\") " pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" Dec 02 15:01:00 crc kubenswrapper[4814]: I1202 
15:01:00.377332 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526fa3fe-0f34-49ac-b9d6-f580b208dae2-combined-ca-bundle\") pod \"keystone-cron-29411461-fpt5d\" (UID: \"526fa3fe-0f34-49ac-b9d6-f580b208dae2\") " pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" Dec 02 15:01:00 crc kubenswrapper[4814]: I1202 15:01:00.377521 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mldjh\" (UniqueName: \"kubernetes.io/projected/526fa3fe-0f34-49ac-b9d6-f580b208dae2-kube-api-access-mldjh\") pod \"keystone-cron-29411461-fpt5d\" (UID: \"526fa3fe-0f34-49ac-b9d6-f580b208dae2\") " pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" Dec 02 15:01:00 crc kubenswrapper[4814]: I1202 15:01:00.378099 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/526fa3fe-0f34-49ac-b9d6-f580b208dae2-fernet-keys\") pod \"keystone-cron-29411461-fpt5d\" (UID: \"526fa3fe-0f34-49ac-b9d6-f580b208dae2\") " pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" Dec 02 15:01:00 crc kubenswrapper[4814]: I1202 15:01:00.379387 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/526fa3fe-0f34-49ac-b9d6-f580b208dae2-config-data\") pod \"keystone-cron-29411461-fpt5d\" (UID: \"526fa3fe-0f34-49ac-b9d6-f580b208dae2\") " pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" Dec 02 15:01:00 crc kubenswrapper[4814]: I1202 15:01:00.387334 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/526fa3fe-0f34-49ac-b9d6-f580b208dae2-fernet-keys\") pod \"keystone-cron-29411461-fpt5d\" (UID: \"526fa3fe-0f34-49ac-b9d6-f580b208dae2\") " pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" Dec 02 15:01:00 crc kubenswrapper[4814]: I1202 
15:01:00.399532 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526fa3fe-0f34-49ac-b9d6-f580b208dae2-combined-ca-bundle\") pod \"keystone-cron-29411461-fpt5d\" (UID: \"526fa3fe-0f34-49ac-b9d6-f580b208dae2\") " pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" Dec 02 15:01:00 crc kubenswrapper[4814]: I1202 15:01:00.400311 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mldjh\" (UniqueName: \"kubernetes.io/projected/526fa3fe-0f34-49ac-b9d6-f580b208dae2-kube-api-access-mldjh\") pod \"keystone-cron-29411461-fpt5d\" (UID: \"526fa3fe-0f34-49ac-b9d6-f580b208dae2\") " pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" Dec 02 15:01:00 crc kubenswrapper[4814]: I1202 15:01:00.411972 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/526fa3fe-0f34-49ac-b9d6-f580b208dae2-config-data\") pod \"keystone-cron-29411461-fpt5d\" (UID: \"526fa3fe-0f34-49ac-b9d6-f580b208dae2\") " pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" Dec 02 15:01:00 crc kubenswrapper[4814]: I1202 15:01:00.483099 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" Dec 02 15:01:01 crc kubenswrapper[4814]: I1202 15:01:01.073433 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-cron-29411461-fpt5d"] Dec 02 15:01:01 crc kubenswrapper[4814]: W1202 15:01:01.077010 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod526fa3fe_0f34_49ac_b9d6_f580b208dae2.slice/crio-02a83a6138d741c214dd6f77db280d7e0d3b4198c32223244b7c484d68e4e174 WatchSource:0}: Error finding container 02a83a6138d741c214dd6f77db280d7e0d3b4198c32223244b7c484d68e4e174: Status 404 returned error can't find the container with id 02a83a6138d741c214dd6f77db280d7e0d3b4198c32223244b7c484d68e4e174 Dec 02 15:01:01 crc kubenswrapper[4814]: I1202 15:01:01.769027 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" event={"ID":"526fa3fe-0f34-49ac-b9d6-f580b208dae2","Type":"ContainerStarted","Data":"df7df63ec9a836b275bab78d8687ec920d24e6939afb16c0aa07f5df0d9d7e43"} Dec 02 15:01:01 crc kubenswrapper[4814]: I1202 15:01:01.769430 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" event={"ID":"526fa3fe-0f34-49ac-b9d6-f580b208dae2","Type":"ContainerStarted","Data":"02a83a6138d741c214dd6f77db280d7e0d3b4198c32223244b7c484d68e4e174"} Dec 02 15:01:01 crc kubenswrapper[4814]: I1202 15:01:01.800895 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" podStartSLOduration=1.80086426 podStartE2EDuration="1.80086426s" podCreationTimestamp="2025-12-02 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:01:01.791598369 +0000 UTC m=+1923.915960573" watchObservedRunningTime="2025-12-02 
15:01:01.80086426 +0000 UTC m=+1923.925226484" Dec 02 15:01:02 crc kubenswrapper[4814]: I1202 15:01:02.145183 4814 scope.go:117] "RemoveContainer" containerID="714b51e6383db7286d3aa6db07d911a84337db800a8889cf4fb860a435a19d22" Dec 02 15:01:02 crc kubenswrapper[4814]: E1202 15:01:02.145547 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 15:01:03 crc kubenswrapper[4814]: I1202 15:01:03.792728 4814 generic.go:334] "Generic (PLEG): container finished" podID="526fa3fe-0f34-49ac-b9d6-f580b208dae2" containerID="df7df63ec9a836b275bab78d8687ec920d24e6939afb16c0aa07f5df0d9d7e43" exitCode=0 Dec 02 15:01:03 crc kubenswrapper[4814]: I1202 15:01:03.792811 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" event={"ID":"526fa3fe-0f34-49ac-b9d6-f580b208dae2","Type":"ContainerDied","Data":"df7df63ec9a836b275bab78d8687ec920d24e6939afb16c0aa07f5df0d9d7e43"} Dec 02 15:01:05 crc kubenswrapper[4814]: I1202 15:01:05.220137 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" Dec 02 15:01:05 crc kubenswrapper[4814]: I1202 15:01:05.389631 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/526fa3fe-0f34-49ac-b9d6-f580b208dae2-config-data\") pod \"526fa3fe-0f34-49ac-b9d6-f580b208dae2\" (UID: \"526fa3fe-0f34-49ac-b9d6-f580b208dae2\") " Dec 02 15:01:05 crc kubenswrapper[4814]: I1202 15:01:05.389860 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/526fa3fe-0f34-49ac-b9d6-f580b208dae2-fernet-keys\") pod \"526fa3fe-0f34-49ac-b9d6-f580b208dae2\" (UID: \"526fa3fe-0f34-49ac-b9d6-f580b208dae2\") " Dec 02 15:01:05 crc kubenswrapper[4814]: I1202 15:01:05.390406 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526fa3fe-0f34-49ac-b9d6-f580b208dae2-combined-ca-bundle\") pod \"526fa3fe-0f34-49ac-b9d6-f580b208dae2\" (UID: \"526fa3fe-0f34-49ac-b9d6-f580b208dae2\") " Dec 02 15:01:05 crc kubenswrapper[4814]: I1202 15:01:05.390460 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mldjh\" (UniqueName: \"kubernetes.io/projected/526fa3fe-0f34-49ac-b9d6-f580b208dae2-kube-api-access-mldjh\") pod \"526fa3fe-0f34-49ac-b9d6-f580b208dae2\" (UID: \"526fa3fe-0f34-49ac-b9d6-f580b208dae2\") " Dec 02 15:01:05 crc kubenswrapper[4814]: I1202 15:01:05.396065 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/526fa3fe-0f34-49ac-b9d6-f580b208dae2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "526fa3fe-0f34-49ac-b9d6-f580b208dae2" (UID: "526fa3fe-0f34-49ac-b9d6-f580b208dae2"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:01:05 crc kubenswrapper[4814]: I1202 15:01:05.396907 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/526fa3fe-0f34-49ac-b9d6-f580b208dae2-kube-api-access-mldjh" (OuterVolumeSpecName: "kube-api-access-mldjh") pod "526fa3fe-0f34-49ac-b9d6-f580b208dae2" (UID: "526fa3fe-0f34-49ac-b9d6-f580b208dae2"). InnerVolumeSpecName "kube-api-access-mldjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:01:05 crc kubenswrapper[4814]: I1202 15:01:05.423182 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/526fa3fe-0f34-49ac-b9d6-f580b208dae2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "526fa3fe-0f34-49ac-b9d6-f580b208dae2" (UID: "526fa3fe-0f34-49ac-b9d6-f580b208dae2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:01:05 crc kubenswrapper[4814]: I1202 15:01:05.454261 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/526fa3fe-0f34-49ac-b9d6-f580b208dae2-config-data" (OuterVolumeSpecName: "config-data") pod "526fa3fe-0f34-49ac-b9d6-f580b208dae2" (UID: "526fa3fe-0f34-49ac-b9d6-f580b208dae2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:01:05 crc kubenswrapper[4814]: I1202 15:01:05.493627 4814 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/526fa3fe-0f34-49ac-b9d6-f580b208dae2-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:05 crc kubenswrapper[4814]: I1202 15:01:05.493684 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526fa3fe-0f34-49ac-b9d6-f580b208dae2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:05 crc kubenswrapper[4814]: I1202 15:01:05.493705 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mldjh\" (UniqueName: \"kubernetes.io/projected/526fa3fe-0f34-49ac-b9d6-f580b208dae2-kube-api-access-mldjh\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:05 crc kubenswrapper[4814]: I1202 15:01:05.493723 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/526fa3fe-0f34-49ac-b9d6-f580b208dae2-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:05 crc kubenswrapper[4814]: I1202 15:01:05.807958 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" event={"ID":"526fa3fe-0f34-49ac-b9d6-f580b208dae2","Type":"ContainerDied","Data":"02a83a6138d741c214dd6f77db280d7e0d3b4198c32223244b7c484d68e4e174"} Dec 02 15:01:05 crc kubenswrapper[4814]: I1202 15:01:05.808000 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02a83a6138d741c214dd6f77db280d7e0d3b4198c32223244b7c484d68e4e174" Dec 02 15:01:05 crc kubenswrapper[4814]: I1202 15:01:05.808009 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29411461-fpt5d" Dec 02 15:01:10 crc kubenswrapper[4814]: E1202 15:01:10.837353 4814 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.13:44480->38.102.83.13:39561: write tcp 38.102.83.13:44480->38.102.83.13:39561: write: broken pipe Dec 02 15:01:15 crc kubenswrapper[4814]: I1202 15:01:15.145374 4814 scope.go:117] "RemoveContainer" containerID="714b51e6383db7286d3aa6db07d911a84337db800a8889cf4fb860a435a19d22" Dec 02 15:01:15 crc kubenswrapper[4814]: E1202 15:01:15.146196 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 15:01:26 crc kubenswrapper[4814]: I1202 15:01:26.144776 4814 scope.go:117] "RemoveContainer" containerID="714b51e6383db7286d3aa6db07d911a84337db800a8889cf4fb860a435a19d22" Dec 02 15:01:26 crc kubenswrapper[4814]: E1202 15:01:26.145622 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 15:01:30 crc kubenswrapper[4814]: E1202 15:01:30.968692 4814 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.13:46650->38.102.83.13:39561: write tcp 38.102.83.13:46650->38.102.83.13:39561: write: broken pipe Dec 02 15:01:32 crc kubenswrapper[4814]: E1202 15:01:32.426003 4814 
upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.13:46668->38.102.83.13:39561: read tcp 38.102.83.13:46668->38.102.83.13:39561: read: connection reset by peer Dec 02 15:01:34 crc kubenswrapper[4814]: E1202 15:01:34.733996 4814 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.13:46730->38.102.83.13:39561: write tcp 38.102.83.13:46730->38.102.83.13:39561: write: broken pipe Dec 02 15:01:38 crc kubenswrapper[4814]: I1202 15:01:38.154166 4814 scope.go:117] "RemoveContainer" containerID="714b51e6383db7286d3aa6db07d911a84337db800a8889cf4fb860a435a19d22" Dec 02 15:01:38 crc kubenswrapper[4814]: E1202 15:01:38.154867 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(02f530f4-1f1c-4de6-8aef-0ac503f89387)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" Dec 02 15:01:39 crc kubenswrapper[4814]: I1202 15:01:39.886327 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-9vppx"] Dec 02 15:01:39 crc kubenswrapper[4814]: I1202 15:01:39.899759 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-9vppx"] Dec 02 15:01:39 crc kubenswrapper[4814]: I1202 15:01:39.929430 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcherf774-account-delete-qnxzv"] Dec 02 15:01:39 crc kubenswrapper[4814]: E1202 15:01:39.929871 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526fa3fe-0f34-49ac-b9d6-f580b208dae2" containerName="keystone-cron" Dec 02 15:01:39 crc kubenswrapper[4814]: I1202 15:01:39.929891 4814 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="526fa3fe-0f34-49ac-b9d6-f580b208dae2" containerName="keystone-cron" Dec 02 15:01:39 crc kubenswrapper[4814]: I1202 15:01:39.930118 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="526fa3fe-0f34-49ac-b9d6-f580b208dae2" containerName="keystone-cron" Dec 02 15:01:39 crc kubenswrapper[4814]: I1202 15:01:39.930819 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherf774-account-delete-qnxzv" Dec 02 15:01:39 crc kubenswrapper[4814]: I1202 15:01:39.936164 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherf774-account-delete-qnxzv"] Dec 02 15:01:39 crc kubenswrapper[4814]: I1202 15:01:39.962819 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.007221 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8mhx\" (UniqueName: \"kubernetes.io/projected/a643d2f3-34a0-42a1-a2bc-8d16e84579f5-kube-api-access-v8mhx\") pod \"watcherf774-account-delete-qnxzv\" (UID: \"a643d2f3-34a0-42a1-a2bc-8d16e84579f5\") " pod="watcher-kuttl-default/watcherf774-account-delete-qnxzv" Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.007731 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a643d2f3-34a0-42a1-a2bc-8d16e84579f5-operator-scripts\") pod \"watcherf774-account-delete-qnxzv\" (UID: \"a643d2f3-34a0-42a1-a2bc-8d16e84579f5\") " pod="watcher-kuttl-default/watcherf774-account-delete-qnxzv" Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.031209 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.033970 4814 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="8d0918f3-3021-40ee-8300-84a3c6916e3b" containerName="watcher-applier" containerID="cri-o://6d0cae3bf09937eef4c8e9ee93d8efa8d0ee89dcafffdf6ee5e49e9036f93c77" gracePeriod=30 Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.096947 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.097265 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="dc71b7f9-59a6-4f13-b7b1-b503f192a8d1" containerName="watcher-kuttl-api-log" containerID="cri-o://0e71b8be9ffc3c072587d25e2828ce19c0737a6866f015a18f332fd496f5a05b" gracePeriod=30 Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.097334 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="dc71b7f9-59a6-4f13-b7b1-b503f192a8d1" containerName="watcher-api" containerID="cri-o://7160936f52c3dea03466a12c83f00fd0b2c320be504dda159a5a1e7fc033486d" gracePeriod=30 Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.109056 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8mhx\" (UniqueName: \"kubernetes.io/projected/a643d2f3-34a0-42a1-a2bc-8d16e84579f5-kube-api-access-v8mhx\") pod \"watcherf774-account-delete-qnxzv\" (UID: \"a643d2f3-34a0-42a1-a2bc-8d16e84579f5\") " pod="watcher-kuttl-default/watcherf774-account-delete-qnxzv" Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.109204 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a643d2f3-34a0-42a1-a2bc-8d16e84579f5-operator-scripts\") pod \"watcherf774-account-delete-qnxzv\" (UID: \"a643d2f3-34a0-42a1-a2bc-8d16e84579f5\") " pod="watcher-kuttl-default/watcherf774-account-delete-qnxzv" Dec 02 
15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.110070 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a643d2f3-34a0-42a1-a2bc-8d16e84579f5-operator-scripts\") pod \"watcherf774-account-delete-qnxzv\" (UID: \"a643d2f3-34a0-42a1-a2bc-8d16e84579f5\") " pod="watcher-kuttl-default/watcherf774-account-delete-qnxzv" Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.157319 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8mhx\" (UniqueName: \"kubernetes.io/projected/a643d2f3-34a0-42a1-a2bc-8d16e84579f5-kube-api-access-v8mhx\") pod \"watcherf774-account-delete-qnxzv\" (UID: \"a643d2f3-34a0-42a1-a2bc-8d16e84579f5\") " pod="watcher-kuttl-default/watcherf774-account-delete-qnxzv" Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.177843 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a32376a2-39d6-4f3f-bf60-7d6ca2fcff62" path="/var/lib/kubelet/pods/a32376a2-39d6-4f3f-bf60-7d6ca2fcff62/volumes" Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.248036 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherf774-account-delete-qnxzv" Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.467857 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.634572 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/02f530f4-1f1c-4de6-8aef-0ac503f89387-custom-prometheus-ca\") pod \"02f530f4-1f1c-4de6-8aef-0ac503f89387\" (UID: \"02f530f4-1f1c-4de6-8aef-0ac503f89387\") " Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.634663 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f530f4-1f1c-4de6-8aef-0ac503f89387-config-data\") pod \"02f530f4-1f1c-4de6-8aef-0ac503f89387\" (UID: \"02f530f4-1f1c-4de6-8aef-0ac503f89387\") " Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.634691 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f530f4-1f1c-4de6-8aef-0ac503f89387-combined-ca-bundle\") pod \"02f530f4-1f1c-4de6-8aef-0ac503f89387\" (UID: \"02f530f4-1f1c-4de6-8aef-0ac503f89387\") " Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.634720 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ll8s\" (UniqueName: \"kubernetes.io/projected/02f530f4-1f1c-4de6-8aef-0ac503f89387-kube-api-access-7ll8s\") pod \"02f530f4-1f1c-4de6-8aef-0ac503f89387\" (UID: \"02f530f4-1f1c-4de6-8aef-0ac503f89387\") " Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.634764 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f530f4-1f1c-4de6-8aef-0ac503f89387-logs\") pod \"02f530f4-1f1c-4de6-8aef-0ac503f89387\" (UID: \"02f530f4-1f1c-4de6-8aef-0ac503f89387\") " Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.635356 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/02f530f4-1f1c-4de6-8aef-0ac503f89387-logs" (OuterVolumeSpecName: "logs") pod "02f530f4-1f1c-4de6-8aef-0ac503f89387" (UID: "02f530f4-1f1c-4de6-8aef-0ac503f89387"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.655555 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f530f4-1f1c-4de6-8aef-0ac503f89387-kube-api-access-7ll8s" (OuterVolumeSpecName: "kube-api-access-7ll8s") pod "02f530f4-1f1c-4de6-8aef-0ac503f89387" (UID: "02f530f4-1f1c-4de6-8aef-0ac503f89387"). InnerVolumeSpecName "kube-api-access-7ll8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.676459 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f530f4-1f1c-4de6-8aef-0ac503f89387-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "02f530f4-1f1c-4de6-8aef-0ac503f89387" (UID: "02f530f4-1f1c-4de6-8aef-0ac503f89387"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.704376 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f530f4-1f1c-4de6-8aef-0ac503f89387-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02f530f4-1f1c-4de6-8aef-0ac503f89387" (UID: "02f530f4-1f1c-4de6-8aef-0ac503f89387"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.740308 4814 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/02f530f4-1f1c-4de6-8aef-0ac503f89387-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.740346 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f530f4-1f1c-4de6-8aef-0ac503f89387-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.740363 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ll8s\" (UniqueName: \"kubernetes.io/projected/02f530f4-1f1c-4de6-8aef-0ac503f89387-kube-api-access-7ll8s\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.740377 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f530f4-1f1c-4de6-8aef-0ac503f89387-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.749479 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f530f4-1f1c-4de6-8aef-0ac503f89387-config-data" (OuterVolumeSpecName: "config-data") pod "02f530f4-1f1c-4de6-8aef-0ac503f89387" (UID: "02f530f4-1f1c-4de6-8aef-0ac503f89387"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.842156 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f530f4-1f1c-4de6-8aef-0ac503f89387-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:40 crc kubenswrapper[4814]: I1202 15:01:40.858330 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherf774-account-delete-qnxzv"] Dec 02 15:01:40 crc kubenswrapper[4814]: E1202 15:01:40.888872 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0cae3bf09937eef4c8e9ee93d8efa8d0ee89dcafffdf6ee5e49e9036f93c77" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 15:01:40 crc kubenswrapper[4814]: E1202 15:01:40.889993 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0cae3bf09937eef4c8e9ee93d8efa8d0ee89dcafffdf6ee5e49e9036f93c77" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 15:01:40 crc kubenswrapper[4814]: E1202 15:01:40.891013 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d0cae3bf09937eef4c8e9ee93d8efa8d0ee89dcafffdf6ee5e49e9036f93c77" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 15:01:40 crc kubenswrapper[4814]: E1202 15:01:40.891046 4814 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
podUID="8d0918f3-3021-40ee-8300-84a3c6916e3b" containerName="watcher-applier" Dec 02 15:01:41 crc kubenswrapper[4814]: I1202 15:01:41.124771 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"02f530f4-1f1c-4de6-8aef-0ac503f89387","Type":"ContainerDied","Data":"65723416a31c882153dc70876ccb9cab7a78d47125b4acab8a4cc8edb0958505"} Dec 02 15:01:41 crc kubenswrapper[4814]: I1202 15:01:41.124820 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:01:41 crc kubenswrapper[4814]: I1202 15:01:41.124838 4814 scope.go:117] "RemoveContainer" containerID="714b51e6383db7286d3aa6db07d911a84337db800a8889cf4fb860a435a19d22" Dec 02 15:01:41 crc kubenswrapper[4814]: I1202 15:01:41.126038 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherf774-account-delete-qnxzv" event={"ID":"a643d2f3-34a0-42a1-a2bc-8d16e84579f5","Type":"ContainerStarted","Data":"cc0286729897215dd1883bee7dc140ef947ab3fbc2ebaab9f9bef98b69481851"} Dec 02 15:01:41 crc kubenswrapper[4814]: I1202 15:01:41.126064 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherf774-account-delete-qnxzv" event={"ID":"a643d2f3-34a0-42a1-a2bc-8d16e84579f5","Type":"ContainerStarted","Data":"b8d2a55111525c5f24d4294bea6065ad1c55fc8f7322eae423bca958b3bb325d"} Dec 02 15:01:41 crc kubenswrapper[4814]: I1202 15:01:41.135572 4814 generic.go:334] "Generic (PLEG): container finished" podID="dc71b7f9-59a6-4f13-b7b1-b503f192a8d1" containerID="0e71b8be9ffc3c072587d25e2828ce19c0737a6866f015a18f332fd496f5a05b" exitCode=143 Dec 02 15:01:41 crc kubenswrapper[4814]: I1202 15:01:41.135619 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1","Type":"ContainerDied","Data":"0e71b8be9ffc3c072587d25e2828ce19c0737a6866f015a18f332fd496f5a05b"} Dec 02 15:01:41 crc kubenswrapper[4814]: I1202 15:01:41.147630 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcherf774-account-delete-qnxzv" podStartSLOduration=2.147596459 podStartE2EDuration="2.147596459s" podCreationTimestamp="2025-12-02 15:01:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:01:41.146731637 +0000 UTC m=+1963.271093841" watchObservedRunningTime="2025-12-02 15:01:41.147596459 +0000 UTC m=+1963.271958663" Dec 02 15:01:41 crc kubenswrapper[4814]: I1202 15:01:41.212682 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:01:41 crc kubenswrapper[4814]: I1202 15:01:41.232244 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:01:41 crc kubenswrapper[4814]: I1202 15:01:41.728947 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="dc71b7f9-59a6-4f13-b7b1-b503f192a8d1" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.153:9322/\": read tcp 10.217.0.2:54706->10.217.0.153:9322: read: connection reset by peer" Dec 02 15:01:41 crc kubenswrapper[4814]: I1202 15:01:41.729018 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="dc71b7f9-59a6-4f13-b7b1-b503f192a8d1" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"https://10.217.0.153:9322/\": read tcp 10.217.0.2:54718->10.217.0.153:9322: read: connection reset by peer" Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.148749 4814 generic.go:334] "Generic (PLEG): container finished" 
podID="dc71b7f9-59a6-4f13-b7b1-b503f192a8d1" containerID="7160936f52c3dea03466a12c83f00fd0b2c320be504dda159a5a1e7fc033486d" exitCode=0 Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.165088 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" path="/var/lib/kubelet/pods/02f530f4-1f1c-4de6-8aef-0ac503f89387/volumes" Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.165768 4814 generic.go:334] "Generic (PLEG): container finished" podID="a643d2f3-34a0-42a1-a2bc-8d16e84579f5" containerID="cc0286729897215dd1883bee7dc140ef947ab3fbc2ebaab9f9bef98b69481851" exitCode=0 Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.166171 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1","Type":"ContainerDied","Data":"7160936f52c3dea03466a12c83f00fd0b2c320be504dda159a5a1e7fc033486d"} Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.166308 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1","Type":"ContainerDied","Data":"d6e5742bc9d18e0780c4e897ef205ad41d9bb56e63ff0eeaee318b9c93202a8a"} Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.166333 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6e5742bc9d18e0780c4e897ef205ad41d9bb56e63ff0eeaee318b9c93202a8a" Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.166362 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherf774-account-delete-qnxzv" event={"ID":"a643d2f3-34a0-42a1-a2bc-8d16e84579f5","Type":"ContainerDied","Data":"cc0286729897215dd1883bee7dc140ef947ab3fbc2ebaab9f9bef98b69481851"} Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.193095 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.272030 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-config-data\") pod \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.272132 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-public-tls-certs\") pod \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.272262 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-logs\") pod \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.272298 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-custom-prometheus-ca\") pod \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.272320 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-combined-ca-bundle\") pod \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.272359 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-bnfk9\" (UniqueName: \"kubernetes.io/projected/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-kube-api-access-bnfk9\") pod \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.272436 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-internal-tls-certs\") pod \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\" (UID: \"dc71b7f9-59a6-4f13-b7b1-b503f192a8d1\") " Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.280629 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-logs" (OuterVolumeSpecName: "logs") pod "dc71b7f9-59a6-4f13-b7b1-b503f192a8d1" (UID: "dc71b7f9-59a6-4f13-b7b1-b503f192a8d1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.288476 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-kube-api-access-bnfk9" (OuterVolumeSpecName: "kube-api-access-bnfk9") pod "dc71b7f9-59a6-4f13-b7b1-b503f192a8d1" (UID: "dc71b7f9-59a6-4f13-b7b1-b503f192a8d1"). InnerVolumeSpecName "kube-api-access-bnfk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.316356 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc71b7f9-59a6-4f13-b7b1-b503f192a8d1" (UID: "dc71b7f9-59a6-4f13-b7b1-b503f192a8d1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.316498 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "dc71b7f9-59a6-4f13-b7b1-b503f192a8d1" (UID: "dc71b7f9-59a6-4f13-b7b1-b503f192a8d1"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.331407 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-config-data" (OuterVolumeSpecName: "config-data") pod "dc71b7f9-59a6-4f13-b7b1-b503f192a8d1" (UID: "dc71b7f9-59a6-4f13-b7b1-b503f192a8d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.338340 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dc71b7f9-59a6-4f13-b7b1-b503f192a8d1" (UID: "dc71b7f9-59a6-4f13-b7b1-b503f192a8d1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.338392 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dc71b7f9-59a6-4f13-b7b1-b503f192a8d1" (UID: "dc71b7f9-59a6-4f13-b7b1-b503f192a8d1"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.374398 4814 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.374433 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.374445 4814 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.374456 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.374467 4814 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.374478 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:42 crc kubenswrapper[4814]: I1202 15:01:42.374488 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnfk9\" (UniqueName: \"kubernetes.io/projected/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1-kube-api-access-bnfk9\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:43 crc kubenswrapper[4814]: I1202 15:01:43.173000 
4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:01:43 crc kubenswrapper[4814]: I1202 15:01:43.207529 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:01:43 crc kubenswrapper[4814]: I1202 15:01:43.215555 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:01:43 crc kubenswrapper[4814]: I1202 15:01:43.518667 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherf774-account-delete-qnxzv" Dec 02 15:01:43 crc kubenswrapper[4814]: I1202 15:01:43.596913 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a643d2f3-34a0-42a1-a2bc-8d16e84579f5-operator-scripts\") pod \"a643d2f3-34a0-42a1-a2bc-8d16e84579f5\" (UID: \"a643d2f3-34a0-42a1-a2bc-8d16e84579f5\") " Dec 02 15:01:43 crc kubenswrapper[4814]: I1202 15:01:43.597014 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8mhx\" (UniqueName: \"kubernetes.io/projected/a643d2f3-34a0-42a1-a2bc-8d16e84579f5-kube-api-access-v8mhx\") pod \"a643d2f3-34a0-42a1-a2bc-8d16e84579f5\" (UID: \"a643d2f3-34a0-42a1-a2bc-8d16e84579f5\") " Dec 02 15:01:43 crc kubenswrapper[4814]: I1202 15:01:43.598147 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a643d2f3-34a0-42a1-a2bc-8d16e84579f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a643d2f3-34a0-42a1-a2bc-8d16e84579f5" (UID: "a643d2f3-34a0-42a1-a2bc-8d16e84579f5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:01:43 crc kubenswrapper[4814]: I1202 15:01:43.606308 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a643d2f3-34a0-42a1-a2bc-8d16e84579f5-kube-api-access-v8mhx" (OuterVolumeSpecName: "kube-api-access-v8mhx") pod "a643d2f3-34a0-42a1-a2bc-8d16e84579f5" (UID: "a643d2f3-34a0-42a1-a2bc-8d16e84579f5"). InnerVolumeSpecName "kube-api-access-v8mhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:01:43 crc kubenswrapper[4814]: I1202 15:01:43.698910 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a643d2f3-34a0-42a1-a2bc-8d16e84579f5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:43 crc kubenswrapper[4814]: I1202 15:01:43.698960 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8mhx\" (UniqueName: \"kubernetes.io/projected/a643d2f3-34a0-42a1-a2bc-8d16e84579f5-kube-api-access-v8mhx\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:44 crc kubenswrapper[4814]: I1202 15:01:44.155657 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc71b7f9-59a6-4f13-b7b1-b503f192a8d1" path="/var/lib/kubelet/pods/dc71b7f9-59a6-4f13-b7b1-b503f192a8d1/volumes" Dec 02 15:01:44 crc kubenswrapper[4814]: I1202 15:01:44.209583 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherf774-account-delete-qnxzv" event={"ID":"a643d2f3-34a0-42a1-a2bc-8d16e84579f5","Type":"ContainerDied","Data":"b8d2a55111525c5f24d4294bea6065ad1c55fc8f7322eae423bca958b3bb325d"} Dec 02 15:01:44 crc kubenswrapper[4814]: I1202 15:01:44.209638 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8d2a55111525c5f24d4294bea6065ad1c55fc8f7322eae423bca958b3bb325d" Dec 02 15:01:44 crc kubenswrapper[4814]: I1202 15:01:44.209745 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherf774-account-delete-qnxzv" Dec 02 15:01:44 crc kubenswrapper[4814]: E1202 15:01:44.243315 4814 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d0918f3_3021_40ee_8300_84a3c6916e3b.slice/crio-6d0cae3bf09937eef4c8e9ee93d8efa8d0ee89dcafffdf6ee5e49e9036f93c77.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d0918f3_3021_40ee_8300_84a3c6916e3b.slice/crio-conmon-6d0cae3bf09937eef4c8e9ee93d8efa8d0ee89dcafffdf6ee5e49e9036f93c77.scope\": RecentStats: unable to find data in memory cache]" Dec 02 15:01:44 crc kubenswrapper[4814]: I1202 15:01:44.954136 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-wwthl"] Dec 02 15:01:44 crc kubenswrapper[4814]: I1202 15:01:44.965823 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-wwthl"] Dec 02 15:01:44 crc kubenswrapper[4814]: I1202 15:01:44.983139 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-f774-account-create-update-nmns6"] Dec 02 15:01:44 crc kubenswrapper[4814]: I1202 15:01:44.988389 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcherf774-account-delete-qnxzv"] Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.001310 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-f774-account-create-update-nmns6"] Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.014897 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcherf774-account-delete-qnxzv"] Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.034935 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.123105 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d0918f3-3021-40ee-8300-84a3c6916e3b-logs\") pod \"8d0918f3-3021-40ee-8300-84a3c6916e3b\" (UID: \"8d0918f3-3021-40ee-8300-84a3c6916e3b\") " Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.123163 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d0918f3-3021-40ee-8300-84a3c6916e3b-config-data\") pod \"8d0918f3-3021-40ee-8300-84a3c6916e3b\" (UID: \"8d0918f3-3021-40ee-8300-84a3c6916e3b\") " Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.123193 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d0918f3-3021-40ee-8300-84a3c6916e3b-combined-ca-bundle\") pod \"8d0918f3-3021-40ee-8300-84a3c6916e3b\" (UID: \"8d0918f3-3021-40ee-8300-84a3c6916e3b\") " Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.123255 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpz5l\" (UniqueName: \"kubernetes.io/projected/8d0918f3-3021-40ee-8300-84a3c6916e3b-kube-api-access-bpz5l\") pod \"8d0918f3-3021-40ee-8300-84a3c6916e3b\" (UID: \"8d0918f3-3021-40ee-8300-84a3c6916e3b\") " Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.123540 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d0918f3-3021-40ee-8300-84a3c6916e3b-logs" (OuterVolumeSpecName: "logs") pod "8d0918f3-3021-40ee-8300-84a3c6916e3b" (UID: "8d0918f3-3021-40ee-8300-84a3c6916e3b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.130105 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d0918f3-3021-40ee-8300-84a3c6916e3b-kube-api-access-bpz5l" (OuterVolumeSpecName: "kube-api-access-bpz5l") pod "8d0918f3-3021-40ee-8300-84a3c6916e3b" (UID: "8d0918f3-3021-40ee-8300-84a3c6916e3b"). InnerVolumeSpecName "kube-api-access-bpz5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.171659 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-zppnr"] Dec 02 15:01:45 crc kubenswrapper[4814]: E1202 15:01:45.172107 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d0918f3-3021-40ee-8300-84a3c6916e3b" containerName="watcher-applier" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.172132 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d0918f3-3021-40ee-8300-84a3c6916e3b" containerName="watcher-applier" Dec 02 15:01:45 crc kubenswrapper[4814]: E1202 15:01:45.172148 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a643d2f3-34a0-42a1-a2bc-8d16e84579f5" containerName="mariadb-account-delete" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.172156 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a643d2f3-34a0-42a1-a2bc-8d16e84579f5" containerName="mariadb-account-delete" Dec 02 15:01:45 crc kubenswrapper[4814]: E1202 15:01:45.172171 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerName="watcher-decision-engine" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.172180 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerName="watcher-decision-engine" Dec 02 15:01:45 crc kubenswrapper[4814]: E1202 15:01:45.172189 4814 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerName="watcher-decision-engine" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.172197 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerName="watcher-decision-engine" Dec 02 15:01:45 crc kubenswrapper[4814]: E1202 15:01:45.172208 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerName="watcher-decision-engine" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.172215 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerName="watcher-decision-engine" Dec 02 15:01:45 crc kubenswrapper[4814]: E1202 15:01:45.172311 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerName="watcher-decision-engine" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.172323 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerName="watcher-decision-engine" Dec 02 15:01:45 crc kubenswrapper[4814]: E1202 15:01:45.172341 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc71b7f9-59a6-4f13-b7b1-b503f192a8d1" containerName="watcher-kuttl-api-log" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.172352 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc71b7f9-59a6-4f13-b7b1-b503f192a8d1" containerName="watcher-kuttl-api-log" Dec 02 15:01:45 crc kubenswrapper[4814]: E1202 15:01:45.172364 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerName="watcher-decision-engine" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.172372 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerName="watcher-decision-engine" Dec 02 15:01:45 crc 
kubenswrapper[4814]: E1202 15:01:45.172389 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerName="watcher-decision-engine" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.172397 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerName="watcher-decision-engine" Dec 02 15:01:45 crc kubenswrapper[4814]: E1202 15:01:45.172417 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc71b7f9-59a6-4f13-b7b1-b503f192a8d1" containerName="watcher-api" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.172425 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc71b7f9-59a6-4f13-b7b1-b503f192a8d1" containerName="watcher-api" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.172632 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerName="watcher-decision-engine" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.172647 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerName="watcher-decision-engine" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.172666 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerName="watcher-decision-engine" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.172680 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a643d2f3-34a0-42a1-a2bc-8d16e84579f5" containerName="mariadb-account-delete" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.172691 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc71b7f9-59a6-4f13-b7b1-b503f192a8d1" containerName="watcher-api" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.172700 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d0918f3-3021-40ee-8300-84a3c6916e3b" 
containerName="watcher-applier" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.172721 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc71b7f9-59a6-4f13-b7b1-b503f192a8d1" containerName="watcher-kuttl-api-log" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.172734 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerName="watcher-decision-engine" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.173515 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-zppnr" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.177725 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d0918f3-3021-40ee-8300-84a3c6916e3b-config-data" (OuterVolumeSpecName: "config-data") pod "8d0918f3-3021-40ee-8300-84a3c6916e3b" (UID: "8d0918f3-3021-40ee-8300-84a3c6916e3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.184084 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-ec58-account-create-update-kwgds"] Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.184752 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerName="watcher-decision-engine" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.184778 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f530f4-1f1c-4de6-8aef-0ac503f89387" containerName="watcher-decision-engine" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.185537 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-ec58-account-create-update-kwgds" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.188182 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.198279 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d0918f3-3021-40ee-8300-84a3c6916e3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d0918f3-3021-40ee-8300-84a3c6916e3b" (UID: "8d0918f3-3021-40ee-8300-84a3c6916e3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.212040 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-zppnr"] Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.217390 4814 generic.go:334] "Generic (PLEG): container finished" podID="8d0918f3-3021-40ee-8300-84a3c6916e3b" containerID="6d0cae3bf09937eef4c8e9ee93d8efa8d0ee89dcafffdf6ee5e49e9036f93c77" exitCode=0 Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.217420 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"8d0918f3-3021-40ee-8300-84a3c6916e3b","Type":"ContainerDied","Data":"6d0cae3bf09937eef4c8e9ee93d8efa8d0ee89dcafffdf6ee5e49e9036f93c77"} Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.217443 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"8d0918f3-3021-40ee-8300-84a3c6916e3b","Type":"ContainerDied","Data":"780f2732250ad144a02c658ebc658518c08298fc0a2930fac98c1878296b2ab6"} Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.217458 4814 scope.go:117] "RemoveContainer" containerID="6d0cae3bf09937eef4c8e9ee93d8efa8d0ee89dcafffdf6ee5e49e9036f93c77" Dec 02 15:01:45 crc 
kubenswrapper[4814]: I1202 15:01:45.217554 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.224653 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d0918f3-3021-40ee-8300-84a3c6916e3b-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.224686 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d0918f3-3021-40ee-8300-84a3c6916e3b-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.224700 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d0918f3-3021-40ee-8300-84a3c6916e3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.224788 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpz5l\" (UniqueName: \"kubernetes.io/projected/8d0918f3-3021-40ee-8300-84a3c6916e3b-kube-api-access-bpz5l\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.235976 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-ec58-account-create-update-kwgds"] Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.278084 4814 scope.go:117] "RemoveContainer" containerID="6d0cae3bf09937eef4c8e9ee93d8efa8d0ee89dcafffdf6ee5e49e9036f93c77" Dec 02 15:01:45 crc kubenswrapper[4814]: E1202 15:01:45.281205 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d0cae3bf09937eef4c8e9ee93d8efa8d0ee89dcafffdf6ee5e49e9036f93c77\": container with ID starting with 6d0cae3bf09937eef4c8e9ee93d8efa8d0ee89dcafffdf6ee5e49e9036f93c77 not found: ID does not exist" 
containerID="6d0cae3bf09937eef4c8e9ee93d8efa8d0ee89dcafffdf6ee5e49e9036f93c77" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.281281 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d0cae3bf09937eef4c8e9ee93d8efa8d0ee89dcafffdf6ee5e49e9036f93c77"} err="failed to get container status \"6d0cae3bf09937eef4c8e9ee93d8efa8d0ee89dcafffdf6ee5e49e9036f93c77\": rpc error: code = NotFound desc = could not find container \"6d0cae3bf09937eef4c8e9ee93d8efa8d0ee89dcafffdf6ee5e49e9036f93c77\": container with ID starting with 6d0cae3bf09937eef4c8e9ee93d8efa8d0ee89dcafffdf6ee5e49e9036f93c77 not found: ID does not exist" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.283404 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.290780 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.326875 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddlv4\" (UniqueName: \"kubernetes.io/projected/b81079bf-94b8-4866-b55d-9fe3a65fcee3-kube-api-access-ddlv4\") pod \"watcher-db-create-zppnr\" (UID: \"b81079bf-94b8-4866-b55d-9fe3a65fcee3\") " pod="watcher-kuttl-default/watcher-db-create-zppnr" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.327006 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b81079bf-94b8-4866-b55d-9fe3a65fcee3-operator-scripts\") pod \"watcher-db-create-zppnr\" (UID: \"b81079bf-94b8-4866-b55d-9fe3a65fcee3\") " pod="watcher-kuttl-default/watcher-db-create-zppnr" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.327919 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-hjtnx\" (UniqueName: \"kubernetes.io/projected/eae13af5-1d8e-4289-85d8-8032e9e89bb5-kube-api-access-hjtnx\") pod \"watcher-ec58-account-create-update-kwgds\" (UID: \"eae13af5-1d8e-4289-85d8-8032e9e89bb5\") " pod="watcher-kuttl-default/watcher-ec58-account-create-update-kwgds" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.328034 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eae13af5-1d8e-4289-85d8-8032e9e89bb5-operator-scripts\") pod \"watcher-ec58-account-create-update-kwgds\" (UID: \"eae13af5-1d8e-4289-85d8-8032e9e89bb5\") " pod="watcher-kuttl-default/watcher-ec58-account-create-update-kwgds" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.429962 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eae13af5-1d8e-4289-85d8-8032e9e89bb5-operator-scripts\") pod \"watcher-ec58-account-create-update-kwgds\" (UID: \"eae13af5-1d8e-4289-85d8-8032e9e89bb5\") " pod="watcher-kuttl-default/watcher-ec58-account-create-update-kwgds" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.430042 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddlv4\" (UniqueName: \"kubernetes.io/projected/b81079bf-94b8-4866-b55d-9fe3a65fcee3-kube-api-access-ddlv4\") pod \"watcher-db-create-zppnr\" (UID: \"b81079bf-94b8-4866-b55d-9fe3a65fcee3\") " pod="watcher-kuttl-default/watcher-db-create-zppnr" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.430102 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b81079bf-94b8-4866-b55d-9fe3a65fcee3-operator-scripts\") pod \"watcher-db-create-zppnr\" (UID: \"b81079bf-94b8-4866-b55d-9fe3a65fcee3\") " pod="watcher-kuttl-default/watcher-db-create-zppnr" Dec 02 
15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.430120 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjtnx\" (UniqueName: \"kubernetes.io/projected/eae13af5-1d8e-4289-85d8-8032e9e89bb5-kube-api-access-hjtnx\") pod \"watcher-ec58-account-create-update-kwgds\" (UID: \"eae13af5-1d8e-4289-85d8-8032e9e89bb5\") " pod="watcher-kuttl-default/watcher-ec58-account-create-update-kwgds" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.430749 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eae13af5-1d8e-4289-85d8-8032e9e89bb5-operator-scripts\") pod \"watcher-ec58-account-create-update-kwgds\" (UID: \"eae13af5-1d8e-4289-85d8-8032e9e89bb5\") " pod="watcher-kuttl-default/watcher-ec58-account-create-update-kwgds" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.430807 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b81079bf-94b8-4866-b55d-9fe3a65fcee3-operator-scripts\") pod \"watcher-db-create-zppnr\" (UID: \"b81079bf-94b8-4866-b55d-9fe3a65fcee3\") " pod="watcher-kuttl-default/watcher-db-create-zppnr" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.446949 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjtnx\" (UniqueName: \"kubernetes.io/projected/eae13af5-1d8e-4289-85d8-8032e9e89bb5-kube-api-access-hjtnx\") pod \"watcher-ec58-account-create-update-kwgds\" (UID: \"eae13af5-1d8e-4289-85d8-8032e9e89bb5\") " pod="watcher-kuttl-default/watcher-ec58-account-create-update-kwgds" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.448684 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddlv4\" (UniqueName: \"kubernetes.io/projected/b81079bf-94b8-4866-b55d-9fe3a65fcee3-kube-api-access-ddlv4\") pod \"watcher-db-create-zppnr\" (UID: 
\"b81079bf-94b8-4866-b55d-9fe3a65fcee3\") " pod="watcher-kuttl-default/watcher-db-create-zppnr" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.572205 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-zppnr" Dec 02 15:01:45 crc kubenswrapper[4814]: I1202 15:01:45.575452 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-ec58-account-create-update-kwgds" Dec 02 15:01:46 crc kubenswrapper[4814]: I1202 15:01:46.076553 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-zppnr"] Dec 02 15:01:46 crc kubenswrapper[4814]: W1202 15:01:46.081332 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb81079bf_94b8_4866_b55d_9fe3a65fcee3.slice/crio-a184cd1f5047bc8f38d195690b004f65b3060203a5376de3eef77e30646e17c0 WatchSource:0}: Error finding container a184cd1f5047bc8f38d195690b004f65b3060203a5376de3eef77e30646e17c0: Status 404 returned error can't find the container with id a184cd1f5047bc8f38d195690b004f65b3060203a5376de3eef77e30646e17c0 Dec 02 15:01:46 crc kubenswrapper[4814]: W1202 15:01:46.153281 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeae13af5_1d8e_4289_85d8_8032e9e89bb5.slice/crio-b9da75f65e2c828b0109f014fb6b6c2ba47456787b9897a429ddcc4858e8cd8c WatchSource:0}: Error finding container b9da75f65e2c828b0109f014fb6b6c2ba47456787b9897a429ddcc4858e8cd8c: Status 404 returned error can't find the container with id b9da75f65e2c828b0109f014fb6b6c2ba47456787b9897a429ddcc4858e8cd8c Dec 02 15:01:46 crc kubenswrapper[4814]: I1202 15:01:46.156569 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8792bd78-30e1-463d-bbb1-3c330c5500ce" path="/var/lib/kubelet/pods/8792bd78-30e1-463d-bbb1-3c330c5500ce/volumes" 
Dec 02 15:01:46 crc kubenswrapper[4814]: I1202 15:01:46.158341 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d0918f3-3021-40ee-8300-84a3c6916e3b" path="/var/lib/kubelet/pods/8d0918f3-3021-40ee-8300-84a3c6916e3b/volumes"
Dec 02 15:01:46 crc kubenswrapper[4814]: I1202 15:01:46.158981 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a643d2f3-34a0-42a1-a2bc-8d16e84579f5" path="/var/lib/kubelet/pods/a643d2f3-34a0-42a1-a2bc-8d16e84579f5/volumes"
Dec 02 15:01:46 crc kubenswrapper[4814]: I1202 15:01:46.160441 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd623c5f-e23a-4914-a707-d4c67b3caccd" path="/var/lib/kubelet/pods/cd623c5f-e23a-4914-a707-d4c67b3caccd/volumes"
Dec 02 15:01:46 crc kubenswrapper[4814]: I1202 15:01:46.160893 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-ec58-account-create-update-kwgds"]
Dec 02 15:01:46 crc kubenswrapper[4814]: I1202 15:01:46.225846 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-ec58-account-create-update-kwgds" event={"ID":"eae13af5-1d8e-4289-85d8-8032e9e89bb5","Type":"ContainerStarted","Data":"b9da75f65e2c828b0109f014fb6b6c2ba47456787b9897a429ddcc4858e8cd8c"}
Dec 02 15:01:46 crc kubenswrapper[4814]: I1202 15:01:46.227320 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-zppnr" event={"ID":"b81079bf-94b8-4866-b55d-9fe3a65fcee3","Type":"ContainerStarted","Data":"a184cd1f5047bc8f38d195690b004f65b3060203a5376de3eef77e30646e17c0"}
Dec 02 15:01:47 crc kubenswrapper[4814]: I1202 15:01:47.238075 4814 generic.go:334] "Generic (PLEG): container finished" podID="b81079bf-94b8-4866-b55d-9fe3a65fcee3" containerID="70f6682c2424d71ef75c692f245537db158f637639108dd494ed2c5ef6aa6f3a" exitCode=0
Dec 02 15:01:47 crc kubenswrapper[4814]: I1202 15:01:47.238131 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-zppnr" event={"ID":"b81079bf-94b8-4866-b55d-9fe3a65fcee3","Type":"ContainerDied","Data":"70f6682c2424d71ef75c692f245537db158f637639108dd494ed2c5ef6aa6f3a"}
Dec 02 15:01:47 crc kubenswrapper[4814]: I1202 15:01:47.240418 4814 generic.go:334] "Generic (PLEG): container finished" podID="eae13af5-1d8e-4289-85d8-8032e9e89bb5" containerID="4389b45cbc6bf49a651ff8f1f7aa8d53515c436ec6aa7fe3b6b4dd051751ad25" exitCode=0
Dec 02 15:01:47 crc kubenswrapper[4814]: I1202 15:01:47.240454 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-ec58-account-create-update-kwgds" event={"ID":"eae13af5-1d8e-4289-85d8-8032e9e89bb5","Type":"ContainerDied","Data":"4389b45cbc6bf49a651ff8f1f7aa8d53515c436ec6aa7fe3b6b4dd051751ad25"}
Dec 02 15:01:48 crc kubenswrapper[4814]: I1202 15:01:48.687532 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-zppnr"
Dec 02 15:01:48 crc kubenswrapper[4814]: I1202 15:01:48.694514 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-ec58-account-create-update-kwgds"
Dec 02 15:01:48 crc kubenswrapper[4814]: I1202 15:01:48.794715 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eae13af5-1d8e-4289-85d8-8032e9e89bb5-operator-scripts\") pod \"eae13af5-1d8e-4289-85d8-8032e9e89bb5\" (UID: \"eae13af5-1d8e-4289-85d8-8032e9e89bb5\") "
Dec 02 15:01:48 crc kubenswrapper[4814]: I1202 15:01:48.794764 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b81079bf-94b8-4866-b55d-9fe3a65fcee3-operator-scripts\") pod \"b81079bf-94b8-4866-b55d-9fe3a65fcee3\" (UID: \"b81079bf-94b8-4866-b55d-9fe3a65fcee3\") "
Dec 02 15:01:48 crc kubenswrapper[4814]: I1202 15:01:48.794870 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlv4\" (UniqueName: \"kubernetes.io/projected/b81079bf-94b8-4866-b55d-9fe3a65fcee3-kube-api-access-ddlv4\") pod \"b81079bf-94b8-4866-b55d-9fe3a65fcee3\" (UID: \"b81079bf-94b8-4866-b55d-9fe3a65fcee3\") "
Dec 02 15:01:48 crc kubenswrapper[4814]: I1202 15:01:48.794917 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjtnx\" (UniqueName: \"kubernetes.io/projected/eae13af5-1d8e-4289-85d8-8032e9e89bb5-kube-api-access-hjtnx\") pod \"eae13af5-1d8e-4289-85d8-8032e9e89bb5\" (UID: \"eae13af5-1d8e-4289-85d8-8032e9e89bb5\") "
Dec 02 15:01:48 crc kubenswrapper[4814]: I1202 15:01:48.795594 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b81079bf-94b8-4866-b55d-9fe3a65fcee3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b81079bf-94b8-4866-b55d-9fe3a65fcee3" (UID: "b81079bf-94b8-4866-b55d-9fe3a65fcee3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:01:48 crc kubenswrapper[4814]: I1202 15:01:48.795922 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eae13af5-1d8e-4289-85d8-8032e9e89bb5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eae13af5-1d8e-4289-85d8-8032e9e89bb5" (UID: "eae13af5-1d8e-4289-85d8-8032e9e89bb5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:01:48 crc kubenswrapper[4814]: I1202 15:01:48.802001 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eae13af5-1d8e-4289-85d8-8032e9e89bb5-kube-api-access-hjtnx" (OuterVolumeSpecName: "kube-api-access-hjtnx") pod "eae13af5-1d8e-4289-85d8-8032e9e89bb5" (UID: "eae13af5-1d8e-4289-85d8-8032e9e89bb5"). InnerVolumeSpecName "kube-api-access-hjtnx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:01:48 crc kubenswrapper[4814]: I1202 15:01:48.803398 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b81079bf-94b8-4866-b55d-9fe3a65fcee3-kube-api-access-ddlv4" (OuterVolumeSpecName: "kube-api-access-ddlv4") pod "b81079bf-94b8-4866-b55d-9fe3a65fcee3" (UID: "b81079bf-94b8-4866-b55d-9fe3a65fcee3"). InnerVolumeSpecName "kube-api-access-ddlv4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:01:48 crc kubenswrapper[4814]: I1202 15:01:48.896725 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddlv4\" (UniqueName: \"kubernetes.io/projected/b81079bf-94b8-4866-b55d-9fe3a65fcee3-kube-api-access-ddlv4\") on node \"crc\" DevicePath \"\""
Dec 02 15:01:48 crc kubenswrapper[4814]: I1202 15:01:48.896761 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjtnx\" (UniqueName: \"kubernetes.io/projected/eae13af5-1d8e-4289-85d8-8032e9e89bb5-kube-api-access-hjtnx\") on node \"crc\" DevicePath \"\""
Dec 02 15:01:48 crc kubenswrapper[4814]: I1202 15:01:48.896771 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eae13af5-1d8e-4289-85d8-8032e9e89bb5-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 15:01:48 crc kubenswrapper[4814]: I1202 15:01:48.896779 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b81079bf-94b8-4866-b55d-9fe3a65fcee3-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 15:01:49 crc kubenswrapper[4814]: I1202 15:01:49.259892 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-ec58-account-create-update-kwgds" event={"ID":"eae13af5-1d8e-4289-85d8-8032e9e89bb5","Type":"ContainerDied","Data":"b9da75f65e2c828b0109f014fb6b6c2ba47456787b9897a429ddcc4858e8cd8c"}
Dec 02 15:01:49 crc kubenswrapper[4814]: I1202 15:01:49.259959 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9da75f65e2c828b0109f014fb6b6c2ba47456787b9897a429ddcc4858e8cd8c"
Dec 02 15:01:49 crc kubenswrapper[4814]: I1202 15:01:49.260003 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-ec58-account-create-update-kwgds"
Dec 02 15:01:49 crc kubenswrapper[4814]: I1202 15:01:49.267105 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-zppnr" event={"ID":"b81079bf-94b8-4866-b55d-9fe3a65fcee3","Type":"ContainerDied","Data":"a184cd1f5047bc8f38d195690b004f65b3060203a5376de3eef77e30646e17c0"}
Dec 02 15:01:49 crc kubenswrapper[4814]: I1202 15:01:49.267147 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a184cd1f5047bc8f38d195690b004f65b3060203a5376de3eef77e30646e17c0"
Dec 02 15:01:49 crc kubenswrapper[4814]: I1202 15:01:49.267180 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-zppnr"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.523460 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"]
Dec 02 15:01:50 crc kubenswrapper[4814]: E1202 15:01:50.524068 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae13af5-1d8e-4289-85d8-8032e9e89bb5" containerName="mariadb-account-create-update"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.524081 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae13af5-1d8e-4289-85d8-8032e9e89bb5" containerName="mariadb-account-create-update"
Dec 02 15:01:50 crc kubenswrapper[4814]: E1202 15:01:50.524099 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81079bf-94b8-4866-b55d-9fe3a65fcee3" containerName="mariadb-database-create"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.524105 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81079bf-94b8-4866-b55d-9fe3a65fcee3" containerName="mariadb-database-create"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.524261 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="eae13af5-1d8e-4289-85d8-8032e9e89bb5" containerName="mariadb-account-create-update"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.524286 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="b81079bf-94b8-4866-b55d-9fe3a65fcee3" containerName="mariadb-database-create"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.524810 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.535083 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.538429 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-gc6nq"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.548303 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"]
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.626889 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brhzn\" (UniqueName: \"kubernetes.io/projected/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-kube-api-access-brhzn\") pod \"watcher-kuttl-db-sync-tb76p\" (UID: \"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.626926 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-db-sync-config-data\") pod \"watcher-kuttl-db-sync-tb76p\" (UID: \"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.626949 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-tb76p\" (UID: \"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.627170 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-config-data\") pod \"watcher-kuttl-db-sync-tb76p\" (UID: \"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.728895 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-config-data\") pod \"watcher-kuttl-db-sync-tb76p\" (UID: \"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.728990 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brhzn\" (UniqueName: \"kubernetes.io/projected/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-kube-api-access-brhzn\") pod \"watcher-kuttl-db-sync-tb76p\" (UID: \"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.729011 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-db-sync-config-data\") pod \"watcher-kuttl-db-sync-tb76p\" (UID: \"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.729032 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-tb76p\" (UID: \"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.733390 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-db-sync-config-data\") pod \"watcher-kuttl-db-sync-tb76p\" (UID: \"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.741076 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-tb76p\" (UID: \"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.744032 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-config-data\") pod \"watcher-kuttl-db-sync-tb76p\" (UID: \"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.755756 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brhzn\" (UniqueName: \"kubernetes.io/projected/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-kube-api-access-brhzn\") pod \"watcher-kuttl-db-sync-tb76p\" (UID: \"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"
Dec 02 15:01:50 crc kubenswrapper[4814]: I1202 15:01:50.840209 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"
Dec 02 15:01:51 crc kubenswrapper[4814]: I1202 15:01:51.310362 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"]
Dec 02 15:01:52 crc kubenswrapper[4814]: I1202 15:01:52.312328 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p" event={"ID":"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816","Type":"ContainerStarted","Data":"dae2fd8167ef76290c1bcb9573f25245d53d2e72cc9e003d7bd2c4368eeee32f"}
Dec 02 15:01:52 crc kubenswrapper[4814]: I1202 15:01:52.312629 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p" event={"ID":"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816","Type":"ContainerStarted","Data":"a5e07a6f2bd0c36051fad7d96679c1acc0099be8c5559712c05a1519846e4e52"}
Dec 02 15:01:52 crc kubenswrapper[4814]: I1202 15:01:52.332818 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p" podStartSLOduration=2.332791432 podStartE2EDuration="2.332791432s" podCreationTimestamp="2025-12-02 15:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:01:52.328329981 +0000 UTC m=+1974.452692195" watchObservedRunningTime="2025-12-02 15:01:52.332791432 +0000 UTC m=+1974.457153646"
Dec 02 15:01:54 crc kubenswrapper[4814]: I1202 15:01:54.349196 4814 generic.go:334] "Generic (PLEG): container finished" podID="9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816" containerID="dae2fd8167ef76290c1bcb9573f25245d53d2e72cc9e003d7bd2c4368eeee32f" exitCode=0
Dec 02 15:01:54 crc kubenswrapper[4814]: I1202 15:01:54.349257 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p" event={"ID":"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816","Type":"ContainerDied","Data":"dae2fd8167ef76290c1bcb9573f25245d53d2e72cc9e003d7bd2c4368eeee32f"}
Dec 02 15:01:55 crc kubenswrapper[4814]: I1202 15:01:55.759509 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"
Dec 02 15:01:55 crc kubenswrapper[4814]: I1202 15:01:55.911647 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-combined-ca-bundle\") pod \"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816\" (UID: \"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816\") "
Dec 02 15:01:55 crc kubenswrapper[4814]: I1202 15:01:55.911777 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-db-sync-config-data\") pod \"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816\" (UID: \"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816\") "
Dec 02 15:01:55 crc kubenswrapper[4814]: I1202 15:01:55.911820 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brhzn\" (UniqueName: \"kubernetes.io/projected/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-kube-api-access-brhzn\") pod \"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816\" (UID: \"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816\") "
Dec 02 15:01:55 crc kubenswrapper[4814]: I1202 15:01:55.911979 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-config-data\") pod \"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816\" (UID: \"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816\") "
Dec 02 15:01:55 crc kubenswrapper[4814]: I1202 15:01:55.919507 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816" (UID: "9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:01:55 crc kubenswrapper[4814]: I1202 15:01:55.921742 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-kube-api-access-brhzn" (OuterVolumeSpecName: "kube-api-access-brhzn") pod "9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816" (UID: "9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816"). InnerVolumeSpecName "kube-api-access-brhzn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:01:55 crc kubenswrapper[4814]: I1202 15:01:55.951356 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816" (UID: "9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:01:55 crc kubenswrapper[4814]: I1202 15:01:55.968669 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-config-data" (OuterVolumeSpecName: "config-data") pod "9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816" (UID: "9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.014571 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.014620 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.014641 4814 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.014658 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brhzn\" (UniqueName: \"kubernetes.io/projected/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816-kube-api-access-brhzn\") on node \"crc\" DevicePath \"\""
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.385312 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p" event={"ID":"9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816","Type":"ContainerDied","Data":"a5e07a6f2bd0c36051fad7d96679c1acc0099be8c5559712c05a1519846e4e52"}
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.385356 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5e07a6f2bd0c36051fad7d96679c1acc0099be8c5559712c05a1519846e4e52"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.385754 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.666778 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Dec 02 15:01:56 crc kubenswrapper[4814]: E1202 15:01:56.667182 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816" containerName="watcher-kuttl-db-sync"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.667208 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816" containerName="watcher-kuttl-db-sync"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.667454 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816" containerName="watcher-kuttl-db-sync"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.668639 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.674782 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.675124 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-gc6nq"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.675312 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.675476 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.678319 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.727379 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/482b478e-e7a1-4448-8da2-f1160fdb3e1a-logs\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.727624 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.727770 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq24g\" (UniqueName: \"kubernetes.io/projected/482b478e-e7a1-4448-8da2-f1160fdb3e1a-kube-api-access-hq24g\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.727857 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.728129 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.728244 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.728330 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.743959 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.745119 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.749658 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.764823 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.787448 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.788599 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.790949 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.825421 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.829356 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/482b478e-e7a1-4448-8da2-f1160fdb3e1a-logs\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.829406 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.829433 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c153b697-66c1-48ec-ba60-556ca7b6c7cb-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"c153b697-66c1-48ec-ba60-556ca7b6c7cb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.829479 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c153b697-66c1-48ec-ba60-556ca7b6c7cb-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"c153b697-66c1-48ec-ba60-556ca7b6c7cb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.829503 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq24g\" (UniqueName: \"kubernetes.io/projected/482b478e-e7a1-4448-8da2-f1160fdb3e1a-kube-api-access-hq24g\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.829525 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c153b697-66c1-48ec-ba60-556ca7b6c7cb-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"c153b697-66c1-48ec-ba60-556ca7b6c7cb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.829546 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.829566 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx87q\" (UniqueName: \"kubernetes.io/projected/c153b697-66c1-48ec-ba60-556ca7b6c7cb-kube-api-access-xx87q\") pod \"watcher-kuttl-applier-0\" (UID: \"c153b697-66c1-48ec-ba60-556ca7b6c7cb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.829584 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.829608 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.829627 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.830815 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/482b478e-e7a1-4448-8da2-f1160fdb3e1a-logs\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.833445 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.833481 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.836801 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.842912 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.845000 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.873670 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq24g\" (UniqueName: \"kubernetes.io/projected/482b478e-e7a1-4448-8da2-f1160fdb3e1a-kube-api-access-hq24g\") pod \"watcher-kuttl-api-0\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.931188 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/24c70328-5be3-476b-8788-fcd8ba8bee33-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"24c70328-5be3-476b-8788-fcd8ba8bee33\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.931269 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c153b697-66c1-48ec-ba60-556ca7b6c7cb-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"c153b697-66c1-48ec-ba60-556ca7b6c7cb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.931293 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24c70328-5be3-476b-8788-fcd8ba8bee33-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"24c70328-5be3-476b-8788-fcd8ba8bee33\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.931330 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c153b697-66c1-48ec-ba60-556ca7b6c7cb-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"c153b697-66c1-48ec-ba60-556ca7b6c7cb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.931353 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c153b697-66c1-48ec-ba60-556ca7b6c7cb-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"c153b697-66c1-48ec-ba60-556ca7b6c7cb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.931373 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24c70328-5be3-476b-8788-fcd8ba8bee33-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"24c70328-5be3-476b-8788-fcd8ba8bee33\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.931396 4814 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24c70328-5be3-476b-8788-fcd8ba8bee33-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"24c70328-5be3-476b-8788-fcd8ba8bee33\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.931414 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx87q\" (UniqueName: \"kubernetes.io/projected/c153b697-66c1-48ec-ba60-556ca7b6c7cb-kube-api-access-xx87q\") pod \"watcher-kuttl-applier-0\" (UID: \"c153b697-66c1-48ec-ba60-556ca7b6c7cb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.931455 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8pwz\" (UniqueName: \"kubernetes.io/projected/24c70328-5be3-476b-8788-fcd8ba8bee33-kube-api-access-q8pwz\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"24c70328-5be3-476b-8788-fcd8ba8bee33\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.932486 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c153b697-66c1-48ec-ba60-556ca7b6c7cb-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"c153b697-66c1-48ec-ba60-556ca7b6c7cb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.935536 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c153b697-66c1-48ec-ba60-556ca7b6c7cb-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"c153b697-66c1-48ec-ba60-556ca7b6c7cb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.935858 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c153b697-66c1-48ec-ba60-556ca7b6c7cb-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"c153b697-66c1-48ec-ba60-556ca7b6c7cb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:01:56 crc kubenswrapper[4814]: I1202 15:01:56.959348 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx87q\" (UniqueName: \"kubernetes.io/projected/c153b697-66c1-48ec-ba60-556ca7b6c7cb-kube-api-access-xx87q\") pod \"watcher-kuttl-applier-0\" (UID: \"c153b697-66c1-48ec-ba60-556ca7b6c7cb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:01:57 crc kubenswrapper[4814]: I1202 15:01:57.020619 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:01:57 crc kubenswrapper[4814]: I1202 15:01:57.036001 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24c70328-5be3-476b-8788-fcd8ba8bee33-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"24c70328-5be3-476b-8788-fcd8ba8bee33\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:01:57 crc kubenswrapper[4814]: I1202 15:01:57.036070 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24c70328-5be3-476b-8788-fcd8ba8bee33-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"24c70328-5be3-476b-8788-fcd8ba8bee33\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:01:57 crc kubenswrapper[4814]: I1202 15:01:57.036114 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8pwz\" (UniqueName: \"kubernetes.io/projected/24c70328-5be3-476b-8788-fcd8ba8bee33-kube-api-access-q8pwz\") pod \"watcher-kuttl-decision-engine-0\" 
(UID: \"24c70328-5be3-476b-8788-fcd8ba8bee33\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:01:57 crc kubenswrapper[4814]: I1202 15:01:57.036144 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/24c70328-5be3-476b-8788-fcd8ba8bee33-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"24c70328-5be3-476b-8788-fcd8ba8bee33\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:01:57 crc kubenswrapper[4814]: I1202 15:01:57.036191 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24c70328-5be3-476b-8788-fcd8ba8bee33-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"24c70328-5be3-476b-8788-fcd8ba8bee33\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:01:57 crc kubenswrapper[4814]: I1202 15:01:57.037265 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24c70328-5be3-476b-8788-fcd8ba8bee33-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"24c70328-5be3-476b-8788-fcd8ba8bee33\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:01:57 crc kubenswrapper[4814]: I1202 15:01:57.042840 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/24c70328-5be3-476b-8788-fcd8ba8bee33-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"24c70328-5be3-476b-8788-fcd8ba8bee33\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:01:57 crc kubenswrapper[4814]: I1202 15:01:57.047912 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24c70328-5be3-476b-8788-fcd8ba8bee33-combined-ca-bundle\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"24c70328-5be3-476b-8788-fcd8ba8bee33\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:01:57 crc kubenswrapper[4814]: I1202 15:01:57.052903 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24c70328-5be3-476b-8788-fcd8ba8bee33-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"24c70328-5be3-476b-8788-fcd8ba8bee33\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:01:57 crc kubenswrapper[4814]: I1202 15:01:57.065533 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:01:57 crc kubenswrapper[4814]: I1202 15:01:57.088830 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8pwz\" (UniqueName: \"kubernetes.io/projected/24c70328-5be3-476b-8788-fcd8ba8bee33-kube-api-access-q8pwz\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"24c70328-5be3-476b-8788-fcd8ba8bee33\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:01:57 crc kubenswrapper[4814]: I1202 15:01:57.118053 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:01:57 crc kubenswrapper[4814]: I1202 15:01:57.529572 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:01:57 crc kubenswrapper[4814]: I1202 15:01:57.655913 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:01:57 crc kubenswrapper[4814]: W1202 15:01:57.658786 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc153b697_66c1_48ec_ba60_556ca7b6c7cb.slice/crio-461af1e184a4999997752b13a17f1cd4f5055724de633033fe6aa441e7270fcf WatchSource:0}: Error finding container 461af1e184a4999997752b13a17f1cd4f5055724de633033fe6aa441e7270fcf: Status 404 returned error can't find the container with id 461af1e184a4999997752b13a17f1cd4f5055724de633033fe6aa441e7270fcf Dec 02 15:01:57 crc kubenswrapper[4814]: I1202 15:01:57.719585 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:01:58 crc kubenswrapper[4814]: I1202 15:01:58.411948 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"c153b697-66c1-48ec-ba60-556ca7b6c7cb","Type":"ContainerStarted","Data":"36306c28b441be17d477a0477235d3f36993bf0696e6574561c52240f8488a71"} Dec 02 15:01:58 crc kubenswrapper[4814]: I1202 15:01:58.412206 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"c153b697-66c1-48ec-ba60-556ca7b6c7cb","Type":"ContainerStarted","Data":"461af1e184a4999997752b13a17f1cd4f5055724de633033fe6aa441e7270fcf"} Dec 02 15:01:58 crc kubenswrapper[4814]: I1202 15:01:58.414699 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"482b478e-e7a1-4448-8da2-f1160fdb3e1a","Type":"ContainerStarted","Data":"91508f1a89b7a65e06b0e237702783c6a1a8a223ea8a5c818feec93a99c34ce2"} Dec 02 15:01:58 crc kubenswrapper[4814]: I1202 15:01:58.414765 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"482b478e-e7a1-4448-8da2-f1160fdb3e1a","Type":"ContainerStarted","Data":"85e7db71c9a55a9f94a8031361d87942994d64fd6df0d41c35f4e78c0aed93cc"} Dec 02 15:01:58 crc kubenswrapper[4814]: I1202 15:01:58.414788 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"482b478e-e7a1-4448-8da2-f1160fdb3e1a","Type":"ContainerStarted","Data":"94b32c1b75a35bf21a0ce62b643a7f69914bf763063c13a6da0e240a2e9a73c5"} Dec 02 15:01:58 crc kubenswrapper[4814]: I1202 15:01:58.415278 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:01:58 crc kubenswrapper[4814]: I1202 15:01:58.418524 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"24c70328-5be3-476b-8788-fcd8ba8bee33","Type":"ContainerStarted","Data":"dc3d70294dcc0edfe532659bf2f37b6d1ec5e05aba888d6ce136bed2f7f1ebbc"} Dec 02 15:01:58 crc kubenswrapper[4814]: I1202 15:01:58.418583 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"24c70328-5be3-476b-8788-fcd8ba8bee33","Type":"ContainerStarted","Data":"89b6f4b8ca75070e9b3b59c758c01b1fb0481dc3a0ee603c7e24aedb295a0cd1"} Dec 02 15:01:58 crc kubenswrapper[4814]: I1202 15:01:58.446367 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.446224453 podStartE2EDuration="2.446224453s" podCreationTimestamp="2025-12-02 15:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:01:58.438346757 +0000 UTC m=+1980.562709011" watchObservedRunningTime="2025-12-02 15:01:58.446224453 +0000 UTC m=+1980.570586677" Dec 02 15:01:58 crc kubenswrapper[4814]: I1202 15:01:58.466898 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.466877406 podStartE2EDuration="2.466877406s" podCreationTimestamp="2025-12-02 15:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:01:58.460542149 +0000 UTC m=+1980.584904373" watchObservedRunningTime="2025-12-02 15:01:58.466877406 +0000 UTC m=+1980.591239630" Dec 02 15:01:58 crc kubenswrapper[4814]: I1202 15:01:58.478350 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.478330281 podStartE2EDuration="2.478330281s" podCreationTimestamp="2025-12-02 15:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:01:58.477001498 +0000 UTC m=+1980.601363702" watchObservedRunningTime="2025-12-02 15:01:58.478330281 +0000 UTC m=+1980.602692485" Dec 02 15:02:00 crc kubenswrapper[4814]: I1202 15:02:00.575565 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:02:01 crc kubenswrapper[4814]: I1202 15:02:01.446041 4814 generic.go:334] "Generic (PLEG): container finished" podID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerID="dc3d70294dcc0edfe532659bf2f37b6d1ec5e05aba888d6ce136bed2f7f1ebbc" exitCode=1 Dec 02 15:02:01 crc kubenswrapper[4814]: I1202 15:02:01.446166 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"24c70328-5be3-476b-8788-fcd8ba8bee33","Type":"ContainerDied","Data":"dc3d70294dcc0edfe532659bf2f37b6d1ec5e05aba888d6ce136bed2f7f1ebbc"} Dec 02 15:02:01 crc kubenswrapper[4814]: I1202 15:02:01.447054 4814 scope.go:117] "RemoveContainer" containerID="dc3d70294dcc0edfe532659bf2f37b6d1ec5e05aba888d6ce136bed2f7f1ebbc" Dec 02 15:02:02 crc kubenswrapper[4814]: I1202 15:02:02.021478 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:02:02 crc kubenswrapper[4814]: I1202 15:02:02.066182 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:02:02 crc kubenswrapper[4814]: I1202 15:02:02.457680 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"24c70328-5be3-476b-8788-fcd8ba8bee33","Type":"ContainerStarted","Data":"a1b66f59edb2cb6d94773c5bbc84d8f6ba48116c66a23a839afdb0f8dfb6c93a"} Dec 02 15:02:05 crc kubenswrapper[4814]: I1202 15:02:05.486564 4814 generic.go:334] "Generic (PLEG): container finished" podID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerID="a1b66f59edb2cb6d94773c5bbc84d8f6ba48116c66a23a839afdb0f8dfb6c93a" exitCode=1 Dec 02 15:02:05 crc kubenswrapper[4814]: I1202 15:02:05.486662 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"24c70328-5be3-476b-8788-fcd8ba8bee33","Type":"ContainerDied","Data":"a1b66f59edb2cb6d94773c5bbc84d8f6ba48116c66a23a839afdb0f8dfb6c93a"} Dec 02 15:02:05 crc kubenswrapper[4814]: I1202 15:02:05.487912 4814 scope.go:117] "RemoveContainer" containerID="dc3d70294dcc0edfe532659bf2f37b6d1ec5e05aba888d6ce136bed2f7f1ebbc" Dec 02 15:02:05 crc kubenswrapper[4814]: I1202 15:02:05.488647 4814 scope.go:117] "RemoveContainer" containerID="a1b66f59edb2cb6d94773c5bbc84d8f6ba48116c66a23a839afdb0f8dfb6c93a" Dec 02 15:02:05 crc 
kubenswrapper[4814]: E1202 15:02:05.489075 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:02:07 crc kubenswrapper[4814]: I1202 15:02:07.021945 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:02:07 crc kubenswrapper[4814]: I1202 15:02:07.033064 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:02:07 crc kubenswrapper[4814]: I1202 15:02:07.066485 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:02:07 crc kubenswrapper[4814]: I1202 15:02:07.090608 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:02:07 crc kubenswrapper[4814]: I1202 15:02:07.118876 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:02:07 crc kubenswrapper[4814]: I1202 15:02:07.118953 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:02:07 crc kubenswrapper[4814]: I1202 15:02:07.120021 4814 scope.go:117] "RemoveContainer" containerID="a1b66f59edb2cb6d94773c5bbc84d8f6ba48116c66a23a839afdb0f8dfb6c93a" Dec 02 15:02:07 crc kubenswrapper[4814]: E1202 15:02:07.120324 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: 
\"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:02:07 crc kubenswrapper[4814]: I1202 15:02:07.530688 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:02:07 crc kubenswrapper[4814]: I1202 15:02:07.546976 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:02:20 crc kubenswrapper[4814]: I1202 15:02:20.144893 4814 scope.go:117] "RemoveContainer" containerID="a1b66f59edb2cb6d94773c5bbc84d8f6ba48116c66a23a839afdb0f8dfb6c93a" Dec 02 15:02:20 crc kubenswrapper[4814]: I1202 15:02:20.616548 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"24c70328-5be3-476b-8788-fcd8ba8bee33","Type":"ContainerStarted","Data":"74d52dce05ff7e453507a9e8cda0c9f9f095e65719693c3f0772e999706b4855"} Dec 02 15:02:23 crc kubenswrapper[4814]: I1202 15:02:23.645446 4814 generic.go:334] "Generic (PLEG): container finished" podID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerID="74d52dce05ff7e453507a9e8cda0c9f9f095e65719693c3f0772e999706b4855" exitCode=1 Dec 02 15:02:23 crc kubenswrapper[4814]: I1202 15:02:23.645525 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"24c70328-5be3-476b-8788-fcd8ba8bee33","Type":"ContainerDied","Data":"74d52dce05ff7e453507a9e8cda0c9f9f095e65719693c3f0772e999706b4855"} Dec 02 15:02:23 crc kubenswrapper[4814]: I1202 15:02:23.646075 4814 scope.go:117] "RemoveContainer" containerID="a1b66f59edb2cb6d94773c5bbc84d8f6ba48116c66a23a839afdb0f8dfb6c93a" Dec 02 15:02:23 crc kubenswrapper[4814]: I1202 15:02:23.646769 4814 scope.go:117] 
"RemoveContainer" containerID="74d52dce05ff7e453507a9e8cda0c9f9f095e65719693c3f0772e999706b4855" Dec 02 15:02:23 crc kubenswrapper[4814]: E1202 15:02:23.647004 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:02:27 crc kubenswrapper[4814]: I1202 15:02:27.127381 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:02:27 crc kubenswrapper[4814]: I1202 15:02:27.128041 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:02:27 crc kubenswrapper[4814]: I1202 15:02:27.128077 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:02:27 crc kubenswrapper[4814]: I1202 15:02:27.128126 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:02:27 crc kubenswrapper[4814]: I1202 15:02:27.128374 4814 scope.go:117] "RemoveContainer" containerID="74d52dce05ff7e453507a9e8cda0c9f9f095e65719693c3f0772e999706b4855" Dec 02 15:02:27 crc kubenswrapper[4814]: E1202 15:02:27.128699 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:02:27 crc kubenswrapper[4814]: I1202 15:02:27.686893 4814 scope.go:117] "RemoveContainer" containerID="74d52dce05ff7e453507a9e8cda0c9f9f095e65719693c3f0772e999706b4855" Dec 02 15:02:27 crc kubenswrapper[4814]: E1202 15:02:27.687634 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:02:39 crc kubenswrapper[4814]: I1202 15:02:39.144729 4814 scope.go:117] "RemoveContainer" containerID="74d52dce05ff7e453507a9e8cda0c9f9f095e65719693c3f0772e999706b4855" Dec 02 15:02:39 crc kubenswrapper[4814]: E1202 15:02:39.145624 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:02:40 crc kubenswrapper[4814]: I1202 15:02:40.279927 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7cm9p"] Dec 02 15:02:40 crc kubenswrapper[4814]: I1202 15:02:40.282945 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7cm9p" Dec 02 15:02:40 crc kubenswrapper[4814]: I1202 15:02:40.308814 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7cm9p"] Dec 02 15:02:40 crc kubenswrapper[4814]: I1202 15:02:40.313474 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7tfw\" (UniqueName: \"kubernetes.io/projected/120c7deb-8f74-4975-839a-3f55e313587a-kube-api-access-b7tfw\") pod \"redhat-operators-7cm9p\" (UID: \"120c7deb-8f74-4975-839a-3f55e313587a\") " pod="openshift-marketplace/redhat-operators-7cm9p" Dec 02 15:02:40 crc kubenswrapper[4814]: I1202 15:02:40.313544 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120c7deb-8f74-4975-839a-3f55e313587a-catalog-content\") pod \"redhat-operators-7cm9p\" (UID: \"120c7deb-8f74-4975-839a-3f55e313587a\") " pod="openshift-marketplace/redhat-operators-7cm9p" Dec 02 15:02:40 crc kubenswrapper[4814]: I1202 15:02:40.313627 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120c7deb-8f74-4975-839a-3f55e313587a-utilities\") pod \"redhat-operators-7cm9p\" (UID: \"120c7deb-8f74-4975-839a-3f55e313587a\") " pod="openshift-marketplace/redhat-operators-7cm9p" Dec 02 15:02:40 crc kubenswrapper[4814]: I1202 15:02:40.414818 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7tfw\" (UniqueName: \"kubernetes.io/projected/120c7deb-8f74-4975-839a-3f55e313587a-kube-api-access-b7tfw\") pod \"redhat-operators-7cm9p\" (UID: \"120c7deb-8f74-4975-839a-3f55e313587a\") " pod="openshift-marketplace/redhat-operators-7cm9p" Dec 02 15:02:40 crc kubenswrapper[4814]: I1202 15:02:40.414874 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120c7deb-8f74-4975-839a-3f55e313587a-catalog-content\") pod \"redhat-operators-7cm9p\" (UID: \"120c7deb-8f74-4975-839a-3f55e313587a\") " pod="openshift-marketplace/redhat-operators-7cm9p" Dec 02 15:02:40 crc kubenswrapper[4814]: I1202 15:02:40.414922 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120c7deb-8f74-4975-839a-3f55e313587a-utilities\") pod \"redhat-operators-7cm9p\" (UID: \"120c7deb-8f74-4975-839a-3f55e313587a\") " pod="openshift-marketplace/redhat-operators-7cm9p" Dec 02 15:02:40 crc kubenswrapper[4814]: I1202 15:02:40.415401 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120c7deb-8f74-4975-839a-3f55e313587a-utilities\") pod \"redhat-operators-7cm9p\" (UID: \"120c7deb-8f74-4975-839a-3f55e313587a\") " pod="openshift-marketplace/redhat-operators-7cm9p" Dec 02 15:02:40 crc kubenswrapper[4814]: I1202 15:02:40.415503 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120c7deb-8f74-4975-839a-3f55e313587a-catalog-content\") pod \"redhat-operators-7cm9p\" (UID: \"120c7deb-8f74-4975-839a-3f55e313587a\") " pod="openshift-marketplace/redhat-operators-7cm9p" Dec 02 15:02:40 crc kubenswrapper[4814]: I1202 15:02:40.437969 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7tfw\" (UniqueName: \"kubernetes.io/projected/120c7deb-8f74-4975-839a-3f55e313587a-kube-api-access-b7tfw\") pod \"redhat-operators-7cm9p\" (UID: \"120c7deb-8f74-4975-839a-3f55e313587a\") " pod="openshift-marketplace/redhat-operators-7cm9p" Dec 02 15:02:40 crc kubenswrapper[4814]: I1202 15:02:40.615363 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7cm9p" Dec 02 15:02:41 crc kubenswrapper[4814]: I1202 15:02:41.117890 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7cm9p"] Dec 02 15:02:41 crc kubenswrapper[4814]: W1202 15:02:41.118739 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod120c7deb_8f74_4975_839a_3f55e313587a.slice/crio-fe5e6dfb8591d5a6994a662818e581bc84f5e0004ffefd49f13212313930ae74 WatchSource:0}: Error finding container fe5e6dfb8591d5a6994a662818e581bc84f5e0004ffefd49f13212313930ae74: Status 404 returned error can't find the container with id fe5e6dfb8591d5a6994a662818e581bc84f5e0004ffefd49f13212313930ae74 Dec 02 15:02:41 crc kubenswrapper[4814]: I1202 15:02:41.804429 4814 generic.go:334] "Generic (PLEG): container finished" podID="120c7deb-8f74-4975-839a-3f55e313587a" containerID="90a4d1e11dbadcedd04682052836f6d98483595f480cab3a10870ef0dd63994a" exitCode=0 Dec 02 15:02:41 crc kubenswrapper[4814]: I1202 15:02:41.804696 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7cm9p" event={"ID":"120c7deb-8f74-4975-839a-3f55e313587a","Type":"ContainerDied","Data":"90a4d1e11dbadcedd04682052836f6d98483595f480cab3a10870ef0dd63994a"} Dec 02 15:02:41 crc kubenswrapper[4814]: I1202 15:02:41.804807 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7cm9p" event={"ID":"120c7deb-8f74-4975-839a-3f55e313587a","Type":"ContainerStarted","Data":"fe5e6dfb8591d5a6994a662818e581bc84f5e0004ffefd49f13212313930ae74"} Dec 02 15:02:41 crc kubenswrapper[4814]: I1202 15:02:41.807439 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 15:02:43 crc kubenswrapper[4814]: I1202 15:02:43.819656 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-7cm9p" event={"ID":"120c7deb-8f74-4975-839a-3f55e313587a","Type":"ContainerStarted","Data":"b1e70978470eef00f93be7b48178fde99216f0667f220ed77a112d8e72baccfb"} Dec 02 15:02:45 crc kubenswrapper[4814]: I1202 15:02:45.843428 4814 generic.go:334] "Generic (PLEG): container finished" podID="120c7deb-8f74-4975-839a-3f55e313587a" containerID="b1e70978470eef00f93be7b48178fde99216f0667f220ed77a112d8e72baccfb" exitCode=0 Dec 02 15:02:45 crc kubenswrapper[4814]: I1202 15:02:45.844346 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7cm9p" event={"ID":"120c7deb-8f74-4975-839a-3f55e313587a","Type":"ContainerDied","Data":"b1e70978470eef00f93be7b48178fde99216f0667f220ed77a112d8e72baccfb"} Dec 02 15:02:46 crc kubenswrapper[4814]: I1202 15:02:46.854081 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7cm9p" event={"ID":"120c7deb-8f74-4975-839a-3f55e313587a","Type":"ContainerStarted","Data":"8085fffdfe63ea50d8f0d4b63f02f0e4bab6c451db7ce29b20880c2ec7ddea62"} Dec 02 15:02:46 crc kubenswrapper[4814]: I1202 15:02:46.873740 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7cm9p" podStartSLOduration=2.205428468 podStartE2EDuration="6.873718737s" podCreationTimestamp="2025-12-02 15:02:40 +0000 UTC" firstStartedPulling="2025-12-02 15:02:41.807248605 +0000 UTC m=+2023.931610809" lastFinishedPulling="2025-12-02 15:02:46.475538864 +0000 UTC m=+2028.599901078" observedRunningTime="2025-12-02 15:02:46.873182143 +0000 UTC m=+2028.997544347" watchObservedRunningTime="2025-12-02 15:02:46.873718737 +0000 UTC m=+2028.998080941" Dec 02 15:02:50 crc kubenswrapper[4814]: I1202 15:02:50.615682 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7cm9p" Dec 02 15:02:50 crc kubenswrapper[4814]: I1202 15:02:50.616289 4814 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7cm9p" Dec 02 15:02:51 crc kubenswrapper[4814]: I1202 15:02:51.690838 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7cm9p" podUID="120c7deb-8f74-4975-839a-3f55e313587a" containerName="registry-server" probeResult="failure" output=< Dec 02 15:02:51 crc kubenswrapper[4814]: timeout: failed to connect service ":50051" within 1s Dec 02 15:02:51 crc kubenswrapper[4814]: > Dec 02 15:02:54 crc kubenswrapper[4814]: I1202 15:02:54.144577 4814 scope.go:117] "RemoveContainer" containerID="74d52dce05ff7e453507a9e8cda0c9f9f095e65719693c3f0772e999706b4855" Dec 02 15:02:54 crc kubenswrapper[4814]: I1202 15:02:54.927991 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"24c70328-5be3-476b-8788-fcd8ba8bee33","Type":"ContainerStarted","Data":"0de32b8304c4e016b0e2a3ed1b55ff0eea7a5b8eb1b19ceadffd9f41ae32ff94"} Dec 02 15:02:57 crc kubenswrapper[4814]: I1202 15:02:57.118848 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:02:57 crc kubenswrapper[4814]: I1202 15:02:57.119525 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:02:57 crc kubenswrapper[4814]: I1202 15:02:57.157905 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:02:57 crc kubenswrapper[4814]: I1202 15:02:57.959178 4814 generic.go:334] "Generic (PLEG): container finished" podID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerID="0de32b8304c4e016b0e2a3ed1b55ff0eea7a5b8eb1b19ceadffd9f41ae32ff94" exitCode=1 Dec 02 15:02:57 crc kubenswrapper[4814]: I1202 15:02:57.959261 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"24c70328-5be3-476b-8788-fcd8ba8bee33","Type":"ContainerDied","Data":"0de32b8304c4e016b0e2a3ed1b55ff0eea7a5b8eb1b19ceadffd9f41ae32ff94"} Dec 02 15:02:57 crc kubenswrapper[4814]: I1202 15:02:57.959654 4814 scope.go:117] "RemoveContainer" containerID="74d52dce05ff7e453507a9e8cda0c9f9f095e65719693c3f0772e999706b4855" Dec 02 15:02:57 crc kubenswrapper[4814]: I1202 15:02:57.959699 4814 scope.go:117] "RemoveContainer" containerID="0de32b8304c4e016b0e2a3ed1b55ff0eea7a5b8eb1b19ceadffd9f41ae32ff94" Dec 02 15:02:57 crc kubenswrapper[4814]: E1202 15:02:57.959962 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:02:58 crc kubenswrapper[4814]: I1202 15:02:58.987138 4814 scope.go:117] "RemoveContainer" containerID="0de32b8304c4e016b0e2a3ed1b55ff0eea7a5b8eb1b19ceadffd9f41ae32ff94" Dec 02 15:02:58 crc kubenswrapper[4814]: E1202 15:02:58.987902 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:03:00 crc kubenswrapper[4814]: I1202 15:03:00.101375 4814 scope.go:117] "RemoveContainer" containerID="0e71b8be9ffc3c072587d25e2828ce19c0737a6866f015a18f332fd496f5a05b" Dec 02 15:03:00 crc kubenswrapper[4814]: I1202 15:03:00.127641 4814 scope.go:117] "RemoveContainer" 
containerID="8f2c70b6e17fc31fc4a291ca17fadeed772c02e55ac6cab2636fc120c9cb0a37" Dec 02 15:03:00 crc kubenswrapper[4814]: I1202 15:03:00.162371 4814 scope.go:117] "RemoveContainer" containerID="7160936f52c3dea03466a12c83f00fd0b2c320be504dda159a5a1e7fc033486d" Dec 02 15:03:00 crc kubenswrapper[4814]: I1202 15:03:00.204436 4814 scope.go:117] "RemoveContainer" containerID="5967076ccf25083a4bd2fabab9342a998c5512ac75d353838dfda406344752f6" Dec 02 15:03:00 crc kubenswrapper[4814]: I1202 15:03:00.236610 4814 scope.go:117] "RemoveContainer" containerID="07456a08ba900247c581217281c5f48adaa0ce2edaf83475e654efc2dcc9f734" Dec 02 15:03:00 crc kubenswrapper[4814]: I1202 15:03:00.667570 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7cm9p" Dec 02 15:03:00 crc kubenswrapper[4814]: I1202 15:03:00.756957 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7cm9p" Dec 02 15:03:04 crc kubenswrapper[4814]: I1202 15:03:04.861828 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7cm9p"] Dec 02 15:03:04 crc kubenswrapper[4814]: I1202 15:03:04.862618 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7cm9p" podUID="120c7deb-8f74-4975-839a-3f55e313587a" containerName="registry-server" containerID="cri-o://8085fffdfe63ea50d8f0d4b63f02f0e4bab6c451db7ce29b20880c2ec7ddea62" gracePeriod=2 Dec 02 15:03:05 crc kubenswrapper[4814]: I1202 15:03:05.051320 4814 generic.go:334] "Generic (PLEG): container finished" podID="120c7deb-8f74-4975-839a-3f55e313587a" containerID="8085fffdfe63ea50d8f0d4b63f02f0e4bab6c451db7ce29b20880c2ec7ddea62" exitCode=0 Dec 02 15:03:05 crc kubenswrapper[4814]: I1202 15:03:05.051356 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7cm9p" 
event={"ID":"120c7deb-8f74-4975-839a-3f55e313587a","Type":"ContainerDied","Data":"8085fffdfe63ea50d8f0d4b63f02f0e4bab6c451db7ce29b20880c2ec7ddea62"} Dec 02 15:03:05 crc kubenswrapper[4814]: I1202 15:03:05.358478 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7cm9p" Dec 02 15:03:05 crc kubenswrapper[4814]: I1202 15:03:05.419893 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7tfw\" (UniqueName: \"kubernetes.io/projected/120c7deb-8f74-4975-839a-3f55e313587a-kube-api-access-b7tfw\") pod \"120c7deb-8f74-4975-839a-3f55e313587a\" (UID: \"120c7deb-8f74-4975-839a-3f55e313587a\") " Dec 02 15:03:05 crc kubenswrapper[4814]: I1202 15:03:05.419996 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120c7deb-8f74-4975-839a-3f55e313587a-catalog-content\") pod \"120c7deb-8f74-4975-839a-3f55e313587a\" (UID: \"120c7deb-8f74-4975-839a-3f55e313587a\") " Dec 02 15:03:05 crc kubenswrapper[4814]: I1202 15:03:05.420109 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120c7deb-8f74-4975-839a-3f55e313587a-utilities\") pod \"120c7deb-8f74-4975-839a-3f55e313587a\" (UID: \"120c7deb-8f74-4975-839a-3f55e313587a\") " Dec 02 15:03:05 crc kubenswrapper[4814]: I1202 15:03:05.421259 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/120c7deb-8f74-4975-839a-3f55e313587a-utilities" (OuterVolumeSpecName: "utilities") pod "120c7deb-8f74-4975-839a-3f55e313587a" (UID: "120c7deb-8f74-4975-839a-3f55e313587a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:03:05 crc kubenswrapper[4814]: I1202 15:03:05.429913 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/120c7deb-8f74-4975-839a-3f55e313587a-kube-api-access-b7tfw" (OuterVolumeSpecName: "kube-api-access-b7tfw") pod "120c7deb-8f74-4975-839a-3f55e313587a" (UID: "120c7deb-8f74-4975-839a-3f55e313587a"). InnerVolumeSpecName "kube-api-access-b7tfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:03:05 crc kubenswrapper[4814]: I1202 15:03:05.516276 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/120c7deb-8f74-4975-839a-3f55e313587a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "120c7deb-8f74-4975-839a-3f55e313587a" (UID: "120c7deb-8f74-4975-839a-3f55e313587a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:03:05 crc kubenswrapper[4814]: I1202 15:03:05.522822 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7tfw\" (UniqueName: \"kubernetes.io/projected/120c7deb-8f74-4975-839a-3f55e313587a-kube-api-access-b7tfw\") on node \"crc\" DevicePath \"\"" Dec 02 15:03:05 crc kubenswrapper[4814]: I1202 15:03:05.522846 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120c7deb-8f74-4975-839a-3f55e313587a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:03:05 crc kubenswrapper[4814]: I1202 15:03:05.522855 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120c7deb-8f74-4975-839a-3f55e313587a-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:03:06 crc kubenswrapper[4814]: I1202 15:03:06.064730 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7cm9p" 
event={"ID":"120c7deb-8f74-4975-839a-3f55e313587a","Type":"ContainerDied","Data":"fe5e6dfb8591d5a6994a662818e581bc84f5e0004ffefd49f13212313930ae74"} Dec 02 15:03:06 crc kubenswrapper[4814]: I1202 15:03:06.064805 4814 scope.go:117] "RemoveContainer" containerID="8085fffdfe63ea50d8f0d4b63f02f0e4bab6c451db7ce29b20880c2ec7ddea62" Dec 02 15:03:06 crc kubenswrapper[4814]: I1202 15:03:06.064807 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7cm9p" Dec 02 15:03:06 crc kubenswrapper[4814]: I1202 15:03:06.089278 4814 scope.go:117] "RemoveContainer" containerID="b1e70978470eef00f93be7b48178fde99216f0667f220ed77a112d8e72baccfb" Dec 02 15:03:06 crc kubenswrapper[4814]: I1202 15:03:06.114123 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7cm9p"] Dec 02 15:03:06 crc kubenswrapper[4814]: I1202 15:03:06.120920 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7cm9p"] Dec 02 15:03:06 crc kubenswrapper[4814]: I1202 15:03:06.138578 4814 scope.go:117] "RemoveContainer" containerID="90a4d1e11dbadcedd04682052836f6d98483595f480cab3a10870ef0dd63994a" Dec 02 15:03:06 crc kubenswrapper[4814]: I1202 15:03:06.167049 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="120c7deb-8f74-4975-839a-3f55e313587a" path="/var/lib/kubelet/pods/120c7deb-8f74-4975-839a-3f55e313587a/volumes" Dec 02 15:03:06 crc kubenswrapper[4814]: E1202 15:03:06.216389 4814 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod120c7deb_8f74_4975_839a_3f55e313587a.slice\": RecentStats: unable to find data in memory cache]" Dec 02 15:03:06 crc kubenswrapper[4814]: I1202 15:03:06.874861 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lw6sp"] Dec 02 15:03:06 crc 
kubenswrapper[4814]: E1202 15:03:06.875623 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120c7deb-8f74-4975-839a-3f55e313587a" containerName="extract-utilities" Dec 02 15:03:06 crc kubenswrapper[4814]: I1202 15:03:06.875647 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="120c7deb-8f74-4975-839a-3f55e313587a" containerName="extract-utilities" Dec 02 15:03:06 crc kubenswrapper[4814]: E1202 15:03:06.875669 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120c7deb-8f74-4975-839a-3f55e313587a" containerName="registry-server" Dec 02 15:03:06 crc kubenswrapper[4814]: I1202 15:03:06.875677 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="120c7deb-8f74-4975-839a-3f55e313587a" containerName="registry-server" Dec 02 15:03:06 crc kubenswrapper[4814]: E1202 15:03:06.875690 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120c7deb-8f74-4975-839a-3f55e313587a" containerName="extract-content" Dec 02 15:03:06 crc kubenswrapper[4814]: I1202 15:03:06.875699 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="120c7deb-8f74-4975-839a-3f55e313587a" containerName="extract-content" Dec 02 15:03:06 crc kubenswrapper[4814]: I1202 15:03:06.875905 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="120c7deb-8f74-4975-839a-3f55e313587a" containerName="registry-server" Dec 02 15:03:06 crc kubenswrapper[4814]: I1202 15:03:06.877650 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lw6sp" Dec 02 15:03:06 crc kubenswrapper[4814]: I1202 15:03:06.889668 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lw6sp"] Dec 02 15:03:06 crc kubenswrapper[4814]: I1202 15:03:06.946301 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ab54ec8-82e0-4e9f-b739-b67393640bc8-catalog-content\") pod \"certified-operators-lw6sp\" (UID: \"4ab54ec8-82e0-4e9f-b739-b67393640bc8\") " pod="openshift-marketplace/certified-operators-lw6sp" Dec 02 15:03:06 crc kubenswrapper[4814]: I1202 15:03:06.946368 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ab54ec8-82e0-4e9f-b739-b67393640bc8-utilities\") pod \"certified-operators-lw6sp\" (UID: \"4ab54ec8-82e0-4e9f-b739-b67393640bc8\") " pod="openshift-marketplace/certified-operators-lw6sp" Dec 02 15:03:06 crc kubenswrapper[4814]: I1202 15:03:06.946435 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d77lc\" (UniqueName: \"kubernetes.io/projected/4ab54ec8-82e0-4e9f-b739-b67393640bc8-kube-api-access-d77lc\") pod \"certified-operators-lw6sp\" (UID: \"4ab54ec8-82e0-4e9f-b739-b67393640bc8\") " pod="openshift-marketplace/certified-operators-lw6sp" Dec 02 15:03:07 crc kubenswrapper[4814]: I1202 15:03:07.048614 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ab54ec8-82e0-4e9f-b739-b67393640bc8-catalog-content\") pod \"certified-operators-lw6sp\" (UID: \"4ab54ec8-82e0-4e9f-b739-b67393640bc8\") " pod="openshift-marketplace/certified-operators-lw6sp" Dec 02 15:03:07 crc kubenswrapper[4814]: I1202 15:03:07.048680 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ab54ec8-82e0-4e9f-b739-b67393640bc8-utilities\") pod \"certified-operators-lw6sp\" (UID: \"4ab54ec8-82e0-4e9f-b739-b67393640bc8\") " pod="openshift-marketplace/certified-operators-lw6sp" Dec 02 15:03:07 crc kubenswrapper[4814]: I1202 15:03:07.048717 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d77lc\" (UniqueName: \"kubernetes.io/projected/4ab54ec8-82e0-4e9f-b739-b67393640bc8-kube-api-access-d77lc\") pod \"certified-operators-lw6sp\" (UID: \"4ab54ec8-82e0-4e9f-b739-b67393640bc8\") " pod="openshift-marketplace/certified-operators-lw6sp" Dec 02 15:03:07 crc kubenswrapper[4814]: I1202 15:03:07.049103 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ab54ec8-82e0-4e9f-b739-b67393640bc8-catalog-content\") pod \"certified-operators-lw6sp\" (UID: \"4ab54ec8-82e0-4e9f-b739-b67393640bc8\") " pod="openshift-marketplace/certified-operators-lw6sp" Dec 02 15:03:07 crc kubenswrapper[4814]: I1202 15:03:07.049361 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ab54ec8-82e0-4e9f-b739-b67393640bc8-utilities\") pod \"certified-operators-lw6sp\" (UID: \"4ab54ec8-82e0-4e9f-b739-b67393640bc8\") " pod="openshift-marketplace/certified-operators-lw6sp" Dec 02 15:03:07 crc kubenswrapper[4814]: I1202 15:03:07.070993 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d77lc\" (UniqueName: \"kubernetes.io/projected/4ab54ec8-82e0-4e9f-b739-b67393640bc8-kube-api-access-d77lc\") pod \"certified-operators-lw6sp\" (UID: \"4ab54ec8-82e0-4e9f-b739-b67393640bc8\") " pod="openshift-marketplace/certified-operators-lw6sp" Dec 02 15:03:07 crc kubenswrapper[4814]: I1202 15:03:07.118376 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:03:07 crc kubenswrapper[4814]: I1202 15:03:07.119058 4814 scope.go:117] "RemoveContainer" containerID="0de32b8304c4e016b0e2a3ed1b55ff0eea7a5b8eb1b19ceadffd9f41ae32ff94" Dec 02 15:03:07 crc kubenswrapper[4814]: E1202 15:03:07.119384 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:03:07 crc kubenswrapper[4814]: I1202 15:03:07.204499 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lw6sp" Dec 02 15:03:07 crc kubenswrapper[4814]: I1202 15:03:07.658197 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lw6sp"] Dec 02 15:03:08 crc kubenswrapper[4814]: I1202 15:03:08.096511 4814 generic.go:334] "Generic (PLEG): container finished" podID="4ab54ec8-82e0-4e9f-b739-b67393640bc8" containerID="8eb3d7167881c4df371d4879a5b21afcf8284854268caebed51cafcce4866e25" exitCode=0 Dec 02 15:03:08 crc kubenswrapper[4814]: I1202 15:03:08.096734 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lw6sp" event={"ID":"4ab54ec8-82e0-4e9f-b739-b67393640bc8","Type":"ContainerDied","Data":"8eb3d7167881c4df371d4879a5b21afcf8284854268caebed51cafcce4866e25"} Dec 02 15:03:08 crc kubenswrapper[4814]: I1202 15:03:08.096810 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lw6sp" 
event={"ID":"4ab54ec8-82e0-4e9f-b739-b67393640bc8","Type":"ContainerStarted","Data":"1e4ef77f226b71110a36a31976e89add287f58449466e3ca87ed1a7b77977df5"} Dec 02 15:03:09 crc kubenswrapper[4814]: I1202 15:03:09.106031 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lw6sp" event={"ID":"4ab54ec8-82e0-4e9f-b739-b67393640bc8","Type":"ContainerStarted","Data":"6e1278afa64de9a27c878969bbb9101a3f54b152c7a92b6f9f99ff1a779a201f"} Dec 02 15:03:10 crc kubenswrapper[4814]: I1202 15:03:10.115309 4814 generic.go:334] "Generic (PLEG): container finished" podID="4ab54ec8-82e0-4e9f-b739-b67393640bc8" containerID="6e1278afa64de9a27c878969bbb9101a3f54b152c7a92b6f9f99ff1a779a201f" exitCode=0 Dec 02 15:03:10 crc kubenswrapper[4814]: I1202 15:03:10.115350 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lw6sp" event={"ID":"4ab54ec8-82e0-4e9f-b739-b67393640bc8","Type":"ContainerDied","Data":"6e1278afa64de9a27c878969bbb9101a3f54b152c7a92b6f9f99ff1a779a201f"} Dec 02 15:03:11 crc kubenswrapper[4814]: I1202 15:03:11.138073 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lw6sp" event={"ID":"4ab54ec8-82e0-4e9f-b739-b67393640bc8","Type":"ContainerStarted","Data":"dc1d1323371446183ec67885412fcbbca73f4acf327fd03b7beb45278346ec87"} Dec 02 15:03:11 crc kubenswrapper[4814]: I1202 15:03:11.157799 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lw6sp" podStartSLOduration=2.68258832 podStartE2EDuration="5.157783498s" podCreationTimestamp="2025-12-02 15:03:06 +0000 UTC" firstStartedPulling="2025-12-02 15:03:08.10273538 +0000 UTC m=+2050.227097624" lastFinishedPulling="2025-12-02 15:03:10.577930598 +0000 UTC m=+2052.702292802" observedRunningTime="2025-12-02 15:03:11.157768078 +0000 UTC m=+2053.282130282" watchObservedRunningTime="2025-12-02 15:03:11.157783498 +0000 UTC 
m=+2053.282145702" Dec 02 15:03:17 crc kubenswrapper[4814]: I1202 15:03:17.204750 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lw6sp" Dec 02 15:03:17 crc kubenswrapper[4814]: I1202 15:03:17.205295 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lw6sp" Dec 02 15:03:17 crc kubenswrapper[4814]: I1202 15:03:17.275735 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lw6sp" Dec 02 15:03:18 crc kubenswrapper[4814]: I1202 15:03:18.279671 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lw6sp" Dec 02 15:03:20 crc kubenswrapper[4814]: I1202 15:03:20.651531 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:03:20 crc kubenswrapper[4814]: I1202 15:03:20.651876 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:03:20 crc kubenswrapper[4814]: I1202 15:03:20.862100 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lw6sp"] Dec 02 15:03:20 crc kubenswrapper[4814]: I1202 15:03:20.862386 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lw6sp" podUID="4ab54ec8-82e0-4e9f-b739-b67393640bc8" containerName="registry-server" 
containerID="cri-o://dc1d1323371446183ec67885412fcbbca73f4acf327fd03b7beb45278346ec87" gracePeriod=2 Dec 02 15:03:21 crc kubenswrapper[4814]: I1202 15:03:21.145360 4814 scope.go:117] "RemoveContainer" containerID="0de32b8304c4e016b0e2a3ed1b55ff0eea7a5b8eb1b19ceadffd9f41ae32ff94" Dec 02 15:03:21 crc kubenswrapper[4814]: E1202 15:03:21.145976 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:03:22 crc kubenswrapper[4814]: I1202 15:03:22.258771 4814 generic.go:334] "Generic (PLEG): container finished" podID="4ab54ec8-82e0-4e9f-b739-b67393640bc8" containerID="dc1d1323371446183ec67885412fcbbca73f4acf327fd03b7beb45278346ec87" exitCode=0 Dec 02 15:03:22 crc kubenswrapper[4814]: I1202 15:03:22.259138 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lw6sp" event={"ID":"4ab54ec8-82e0-4e9f-b739-b67393640bc8","Type":"ContainerDied","Data":"dc1d1323371446183ec67885412fcbbca73f4acf327fd03b7beb45278346ec87"} Dec 02 15:03:22 crc kubenswrapper[4814]: I1202 15:03:22.529000 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lw6sp" Dec 02 15:03:22 crc kubenswrapper[4814]: I1202 15:03:22.628349 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ab54ec8-82e0-4e9f-b739-b67393640bc8-utilities\") pod \"4ab54ec8-82e0-4e9f-b739-b67393640bc8\" (UID: \"4ab54ec8-82e0-4e9f-b739-b67393640bc8\") " Dec 02 15:03:22 crc kubenswrapper[4814]: I1202 15:03:22.628442 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ab54ec8-82e0-4e9f-b739-b67393640bc8-catalog-content\") pod \"4ab54ec8-82e0-4e9f-b739-b67393640bc8\" (UID: \"4ab54ec8-82e0-4e9f-b739-b67393640bc8\") " Dec 02 15:03:22 crc kubenswrapper[4814]: I1202 15:03:22.628505 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d77lc\" (UniqueName: \"kubernetes.io/projected/4ab54ec8-82e0-4e9f-b739-b67393640bc8-kube-api-access-d77lc\") pod \"4ab54ec8-82e0-4e9f-b739-b67393640bc8\" (UID: \"4ab54ec8-82e0-4e9f-b739-b67393640bc8\") " Dec 02 15:03:22 crc kubenswrapper[4814]: I1202 15:03:22.630082 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ab54ec8-82e0-4e9f-b739-b67393640bc8-utilities" (OuterVolumeSpecName: "utilities") pod "4ab54ec8-82e0-4e9f-b739-b67393640bc8" (UID: "4ab54ec8-82e0-4e9f-b739-b67393640bc8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:03:22 crc kubenswrapper[4814]: I1202 15:03:22.633849 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ab54ec8-82e0-4e9f-b739-b67393640bc8-kube-api-access-d77lc" (OuterVolumeSpecName: "kube-api-access-d77lc") pod "4ab54ec8-82e0-4e9f-b739-b67393640bc8" (UID: "4ab54ec8-82e0-4e9f-b739-b67393640bc8"). InnerVolumeSpecName "kube-api-access-d77lc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:03:22 crc kubenswrapper[4814]: I1202 15:03:22.713388 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ab54ec8-82e0-4e9f-b739-b67393640bc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ab54ec8-82e0-4e9f-b739-b67393640bc8" (UID: "4ab54ec8-82e0-4e9f-b739-b67393640bc8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:03:22 crc kubenswrapper[4814]: I1202 15:03:22.730299 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ab54ec8-82e0-4e9f-b739-b67393640bc8-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:03:22 crc kubenswrapper[4814]: I1202 15:03:22.730347 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ab54ec8-82e0-4e9f-b739-b67393640bc8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:03:22 crc kubenswrapper[4814]: I1202 15:03:22.730364 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d77lc\" (UniqueName: \"kubernetes.io/projected/4ab54ec8-82e0-4e9f-b739-b67393640bc8-kube-api-access-d77lc\") on node \"crc\" DevicePath \"\"" Dec 02 15:03:23 crc kubenswrapper[4814]: I1202 15:03:23.270529 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lw6sp" event={"ID":"4ab54ec8-82e0-4e9f-b739-b67393640bc8","Type":"ContainerDied","Data":"1e4ef77f226b71110a36a31976e89add287f58449466e3ca87ed1a7b77977df5"} Dec 02 15:03:23 crc kubenswrapper[4814]: I1202 15:03:23.270882 4814 scope.go:117] "RemoveContainer" containerID="dc1d1323371446183ec67885412fcbbca73f4acf327fd03b7beb45278346ec87" Dec 02 15:03:23 crc kubenswrapper[4814]: I1202 15:03:23.270573 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lw6sp" Dec 02 15:03:23 crc kubenswrapper[4814]: I1202 15:03:23.302165 4814 scope.go:117] "RemoveContainer" containerID="6e1278afa64de9a27c878969bbb9101a3f54b152c7a92b6f9f99ff1a779a201f" Dec 02 15:03:23 crc kubenswrapper[4814]: I1202 15:03:23.321263 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lw6sp"] Dec 02 15:03:23 crc kubenswrapper[4814]: I1202 15:03:23.330761 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lw6sp"] Dec 02 15:03:23 crc kubenswrapper[4814]: I1202 15:03:23.335008 4814 scope.go:117] "RemoveContainer" containerID="8eb3d7167881c4df371d4879a5b21afcf8284854268caebed51cafcce4866e25" Dec 02 15:03:24 crc kubenswrapper[4814]: I1202 15:03:24.159801 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ab54ec8-82e0-4e9f-b739-b67393640bc8" path="/var/lib/kubelet/pods/4ab54ec8-82e0-4e9f-b739-b67393640bc8/volumes" Dec 02 15:03:27 crc kubenswrapper[4814]: I1202 15:03:27.119347 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:03:27 crc kubenswrapper[4814]: I1202 15:03:27.121144 4814 scope.go:117] "RemoveContainer" containerID="0de32b8304c4e016b0e2a3ed1b55ff0eea7a5b8eb1b19ceadffd9f41ae32ff94" Dec 02 15:03:27 crc kubenswrapper[4814]: E1202 15:03:27.121654 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:03:38 crc kubenswrapper[4814]: I1202 15:03:38.168382 4814 scope.go:117] "RemoveContainer" 
containerID="0de32b8304c4e016b0e2a3ed1b55ff0eea7a5b8eb1b19ceadffd9f41ae32ff94" Dec 02 15:03:38 crc kubenswrapper[4814]: I1202 15:03:38.454574 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"24c70328-5be3-476b-8788-fcd8ba8bee33","Type":"ContainerStarted","Data":"27880a4c0bfd552ffcdc15f8bd4bbdfa49333ac6f76d17c888c47af25812bf22"} Dec 02 15:03:41 crc kubenswrapper[4814]: I1202 15:03:41.479137 4814 generic.go:334] "Generic (PLEG): container finished" podID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerID="27880a4c0bfd552ffcdc15f8bd4bbdfa49333ac6f76d17c888c47af25812bf22" exitCode=1 Dec 02 15:03:41 crc kubenswrapper[4814]: I1202 15:03:41.479220 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"24c70328-5be3-476b-8788-fcd8ba8bee33","Type":"ContainerDied","Data":"27880a4c0bfd552ffcdc15f8bd4bbdfa49333ac6f76d17c888c47af25812bf22"} Dec 02 15:03:41 crc kubenswrapper[4814]: I1202 15:03:41.479454 4814 scope.go:117] "RemoveContainer" containerID="0de32b8304c4e016b0e2a3ed1b55ff0eea7a5b8eb1b19ceadffd9f41ae32ff94" Dec 02 15:03:41 crc kubenswrapper[4814]: I1202 15:03:41.479995 4814 scope.go:117] "RemoveContainer" containerID="27880a4c0bfd552ffcdc15f8bd4bbdfa49333ac6f76d17c888c47af25812bf22" Dec 02 15:03:41 crc kubenswrapper[4814]: E1202 15:03:41.480196 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:03:47 crc kubenswrapper[4814]: I1202 15:03:47.118822 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:03:47 crc kubenswrapper[4814]: I1202 15:03:47.119242 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:03:47 crc kubenswrapper[4814]: I1202 15:03:47.119851 4814 scope.go:117] "RemoveContainer" containerID="27880a4c0bfd552ffcdc15f8bd4bbdfa49333ac6f76d17c888c47af25812bf22" Dec 02 15:03:47 crc kubenswrapper[4814]: E1202 15:03:47.120059 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:03:50 crc kubenswrapper[4814]: I1202 15:03:50.651038 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:03:50 crc kubenswrapper[4814]: I1202 15:03:50.651719 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:03:57 crc kubenswrapper[4814]: I1202 15:03:57.118383 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:03:57 crc kubenswrapper[4814]: I1202 15:03:57.118846 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:03:57 crc kubenswrapper[4814]: I1202 15:03:57.119439 4814 scope.go:117] "RemoveContainer" containerID="27880a4c0bfd552ffcdc15f8bd4bbdfa49333ac6f76d17c888c47af25812bf22" Dec 02 15:03:57 crc kubenswrapper[4814]: E1202 15:03:57.119636 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:04:10 crc kubenswrapper[4814]: I1202 15:04:10.145314 4814 scope.go:117] "RemoveContainer" containerID="27880a4c0bfd552ffcdc15f8bd4bbdfa49333ac6f76d17c888c47af25812bf22" Dec 02 15:04:10 crc kubenswrapper[4814]: E1202 15:04:10.145955 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:04:20 crc kubenswrapper[4814]: I1202 15:04:20.651336 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:04:20 crc kubenswrapper[4814]: I1202 15:04:20.651925 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:04:20 crc kubenswrapper[4814]: I1202 15:04:20.651974 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 15:04:20 crc kubenswrapper[4814]: I1202 15:04:20.652693 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"908761a95c0f9f6447a8c26806b20d7573f99def1bcbce124d6957c579039255"} pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 15:04:20 crc kubenswrapper[4814]: I1202 15:04:20.652747 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" containerID="cri-o://908761a95c0f9f6447a8c26806b20d7573f99def1bcbce124d6957c579039255" gracePeriod=600 Dec 02 15:04:20 crc kubenswrapper[4814]: I1202 15:04:20.839865 4814 generic.go:334] "Generic (PLEG): container finished" podID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerID="908761a95c0f9f6447a8c26806b20d7573f99def1bcbce124d6957c579039255" exitCode=0 Dec 02 15:04:20 crc kubenswrapper[4814]: I1202 15:04:20.839906 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerDied","Data":"908761a95c0f9f6447a8c26806b20d7573f99def1bcbce124d6957c579039255"} Dec 02 15:04:20 crc kubenswrapper[4814]: I1202 15:04:20.839936 4814 scope.go:117] "RemoveContainer" containerID="d1df6ba0cd3e121b6aa598458ada230ed0b89d92718e6ff0b1773d63dda742aa" Dec 02 15:04:21 crc 
kubenswrapper[4814]: I1202 15:04:21.853333 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerStarted","Data":"aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584"} Dec 02 15:04:22 crc kubenswrapper[4814]: I1202 15:04:22.145400 4814 scope.go:117] "RemoveContainer" containerID="27880a4c0bfd552ffcdc15f8bd4bbdfa49333ac6f76d17c888c47af25812bf22" Dec 02 15:04:22 crc kubenswrapper[4814]: E1202 15:04:22.145606 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:04:34 crc kubenswrapper[4814]: I1202 15:04:34.145414 4814 scope.go:117] "RemoveContainer" containerID="27880a4c0bfd552ffcdc15f8bd4bbdfa49333ac6f76d17c888c47af25812bf22" Dec 02 15:04:34 crc kubenswrapper[4814]: E1202 15:04:34.146034 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:04:46 crc kubenswrapper[4814]: I1202 15:04:46.146039 4814 scope.go:117] "RemoveContainer" containerID="27880a4c0bfd552ffcdc15f8bd4bbdfa49333ac6f76d17c888c47af25812bf22" Dec 02 15:04:46 crc kubenswrapper[4814]: E1202 15:04:46.147593 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:05:01 crc kubenswrapper[4814]: I1202 15:05:01.145484 4814 scope.go:117] "RemoveContainer" containerID="27880a4c0bfd552ffcdc15f8bd4bbdfa49333ac6f76d17c888c47af25812bf22" Dec 02 15:05:02 crc kubenswrapper[4814]: I1202 15:05:02.186895 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"24c70328-5be3-476b-8788-fcd8ba8bee33","Type":"ContainerStarted","Data":"376a9ab6c2d9674333d887cad80c4852477321a303aeeea27a5b6bd974830465"} Dec 02 15:05:04 crc kubenswrapper[4814]: I1202 15:05:04.209423 4814 generic.go:334] "Generic (PLEG): container finished" podID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerID="376a9ab6c2d9674333d887cad80c4852477321a303aeeea27a5b6bd974830465" exitCode=1 Dec 02 15:05:04 crc kubenswrapper[4814]: I1202 15:05:04.209496 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"24c70328-5be3-476b-8788-fcd8ba8bee33","Type":"ContainerDied","Data":"376a9ab6c2d9674333d887cad80c4852477321a303aeeea27a5b6bd974830465"} Dec 02 15:05:04 crc kubenswrapper[4814]: I1202 15:05:04.209690 4814 scope.go:117] "RemoveContainer" containerID="27880a4c0bfd552ffcdc15f8bd4bbdfa49333ac6f76d17c888c47af25812bf22" Dec 02 15:05:04 crc kubenswrapper[4814]: I1202 15:05:04.210161 4814 scope.go:117] "RemoveContainer" containerID="376a9ab6c2d9674333d887cad80c4852477321a303aeeea27a5b6bd974830465" Dec 02 15:05:04 crc kubenswrapper[4814]: E1202 15:05:04.210394 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s 
restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:05:07 crc kubenswrapper[4814]: I1202 15:05:07.118651 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:05:07 crc kubenswrapper[4814]: I1202 15:05:07.118993 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:05:07 crc kubenswrapper[4814]: I1202 15:05:07.119904 4814 scope.go:117] "RemoveContainer" containerID="376a9ab6c2d9674333d887cad80c4852477321a303aeeea27a5b6bd974830465" Dec 02 15:05:07 crc kubenswrapper[4814]: E1202 15:05:07.120383 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:05:21 crc kubenswrapper[4814]: I1202 15:05:21.144529 4814 scope.go:117] "RemoveContainer" containerID="376a9ab6c2d9674333d887cad80c4852477321a303aeeea27a5b6bd974830465" Dec 02 15:05:21 crc kubenswrapper[4814]: E1202 15:05:21.145177 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:05:27 crc 
kubenswrapper[4814]: I1202 15:05:27.118565 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:05:27 crc kubenswrapper[4814]: I1202 15:05:27.120404 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:05:27 crc kubenswrapper[4814]: I1202 15:05:27.121097 4814 scope.go:117] "RemoveContainer" containerID="376a9ab6c2d9674333d887cad80c4852477321a303aeeea27a5b6bd974830465" Dec 02 15:05:27 crc kubenswrapper[4814]: E1202 15:05:27.121393 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:05:27 crc kubenswrapper[4814]: I1202 15:05:27.426033 4814 scope.go:117] "RemoveContainer" containerID="376a9ab6c2d9674333d887cad80c4852477321a303aeeea27a5b6bd974830465" Dec 02 15:05:27 crc kubenswrapper[4814]: E1202 15:05:27.426494 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:05:42 crc kubenswrapper[4814]: I1202 15:05:42.145021 4814 scope.go:117] "RemoveContainer" containerID="376a9ab6c2d9674333d887cad80c4852477321a303aeeea27a5b6bd974830465" Dec 02 15:05:42 crc kubenswrapper[4814]: E1202 15:05:42.146799 4814 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:05:55 crc kubenswrapper[4814]: I1202 15:05:55.145417 4814 scope.go:117] "RemoveContainer" containerID="376a9ab6c2d9674333d887cad80c4852477321a303aeeea27a5b6bd974830465" Dec 02 15:05:55 crc kubenswrapper[4814]: E1202 15:05:55.146210 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:06:08 crc kubenswrapper[4814]: I1202 15:06:08.151894 4814 scope.go:117] "RemoveContainer" containerID="376a9ab6c2d9674333d887cad80c4852477321a303aeeea27a5b6bd974830465" Dec 02 15:06:08 crc kubenswrapper[4814]: E1202 15:06:08.152533 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:06:19 crc kubenswrapper[4814]: I1202 15:06:19.144461 4814 scope.go:117] "RemoveContainer" containerID="376a9ab6c2d9674333d887cad80c4852477321a303aeeea27a5b6bd974830465" Dec 02 15:06:19 crc kubenswrapper[4814]: E1202 15:06:19.145242 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:06:29 crc kubenswrapper[4814]: I1202 15:06:29.523993 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9blqc"] Dec 02 15:06:29 crc kubenswrapper[4814]: E1202 15:06:29.526177 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab54ec8-82e0-4e9f-b739-b67393640bc8" containerName="extract-utilities" Dec 02 15:06:29 crc kubenswrapper[4814]: I1202 15:06:29.526402 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab54ec8-82e0-4e9f-b739-b67393640bc8" containerName="extract-utilities" Dec 02 15:06:29 crc kubenswrapper[4814]: E1202 15:06:29.526520 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab54ec8-82e0-4e9f-b739-b67393640bc8" containerName="extract-content" Dec 02 15:06:29 crc kubenswrapper[4814]: I1202 15:06:29.526615 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab54ec8-82e0-4e9f-b739-b67393640bc8" containerName="extract-content" Dec 02 15:06:29 crc kubenswrapper[4814]: E1202 15:06:29.526754 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab54ec8-82e0-4e9f-b739-b67393640bc8" containerName="registry-server" Dec 02 15:06:29 crc kubenswrapper[4814]: I1202 15:06:29.526850 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab54ec8-82e0-4e9f-b739-b67393640bc8" containerName="registry-server" Dec 02 15:06:29 crc kubenswrapper[4814]: I1202 15:06:29.527199 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ab54ec8-82e0-4e9f-b739-b67393640bc8" containerName="registry-server" Dec 02 15:06:29 crc kubenswrapper[4814]: I1202 15:06:29.536032 4814 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9blqc"] Dec 02 15:06:29 crc kubenswrapper[4814]: I1202 15:06:29.536253 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9blqc" Dec 02 15:06:29 crc kubenswrapper[4814]: I1202 15:06:29.663291 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09d4fd83-ac02-42af-8ada-49cd9e87fd9f-utilities\") pod \"redhat-marketplace-9blqc\" (UID: \"09d4fd83-ac02-42af-8ada-49cd9e87fd9f\") " pod="openshift-marketplace/redhat-marketplace-9blqc" Dec 02 15:06:29 crc kubenswrapper[4814]: I1202 15:06:29.663753 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09d4fd83-ac02-42af-8ada-49cd9e87fd9f-catalog-content\") pod \"redhat-marketplace-9blqc\" (UID: \"09d4fd83-ac02-42af-8ada-49cd9e87fd9f\") " pod="openshift-marketplace/redhat-marketplace-9blqc" Dec 02 15:06:29 crc kubenswrapper[4814]: I1202 15:06:29.663821 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4hsz\" (UniqueName: \"kubernetes.io/projected/09d4fd83-ac02-42af-8ada-49cd9e87fd9f-kube-api-access-b4hsz\") pod \"redhat-marketplace-9blqc\" (UID: \"09d4fd83-ac02-42af-8ada-49cd9e87fd9f\") " pod="openshift-marketplace/redhat-marketplace-9blqc" Dec 02 15:06:29 crc kubenswrapper[4814]: I1202 15:06:29.765637 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4hsz\" (UniqueName: \"kubernetes.io/projected/09d4fd83-ac02-42af-8ada-49cd9e87fd9f-kube-api-access-b4hsz\") pod \"redhat-marketplace-9blqc\" (UID: \"09d4fd83-ac02-42af-8ada-49cd9e87fd9f\") " pod="openshift-marketplace/redhat-marketplace-9blqc" Dec 02 15:06:29 crc kubenswrapper[4814]: I1202 15:06:29.765793 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09d4fd83-ac02-42af-8ada-49cd9e87fd9f-utilities\") pod \"redhat-marketplace-9blqc\" (UID: \"09d4fd83-ac02-42af-8ada-49cd9e87fd9f\") " pod="openshift-marketplace/redhat-marketplace-9blqc" Dec 02 15:06:29 crc kubenswrapper[4814]: I1202 15:06:29.765907 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09d4fd83-ac02-42af-8ada-49cd9e87fd9f-catalog-content\") pod \"redhat-marketplace-9blqc\" (UID: \"09d4fd83-ac02-42af-8ada-49cd9e87fd9f\") " pod="openshift-marketplace/redhat-marketplace-9blqc" Dec 02 15:06:29 crc kubenswrapper[4814]: I1202 15:06:29.766222 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09d4fd83-ac02-42af-8ada-49cd9e87fd9f-utilities\") pod \"redhat-marketplace-9blqc\" (UID: \"09d4fd83-ac02-42af-8ada-49cd9e87fd9f\") " pod="openshift-marketplace/redhat-marketplace-9blqc" Dec 02 15:06:29 crc kubenswrapper[4814]: I1202 15:06:29.766514 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09d4fd83-ac02-42af-8ada-49cd9e87fd9f-catalog-content\") pod \"redhat-marketplace-9blqc\" (UID: \"09d4fd83-ac02-42af-8ada-49cd9e87fd9f\") " pod="openshift-marketplace/redhat-marketplace-9blqc" Dec 02 15:06:29 crc kubenswrapper[4814]: I1202 15:06:29.803894 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4hsz\" (UniqueName: \"kubernetes.io/projected/09d4fd83-ac02-42af-8ada-49cd9e87fd9f-kube-api-access-b4hsz\") pod \"redhat-marketplace-9blqc\" (UID: \"09d4fd83-ac02-42af-8ada-49cd9e87fd9f\") " pod="openshift-marketplace/redhat-marketplace-9blqc" Dec 02 15:06:29 crc kubenswrapper[4814]: I1202 15:06:29.861711 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9blqc" Dec 02 15:06:30 crc kubenswrapper[4814]: I1202 15:06:30.319415 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9blqc"] Dec 02 15:06:31 crc kubenswrapper[4814]: I1202 15:06:31.017706 4814 generic.go:334] "Generic (PLEG): container finished" podID="09d4fd83-ac02-42af-8ada-49cd9e87fd9f" containerID="8e52980b5cfc4a6ce7cc59efecb9c088bae2c8b7f9d51fe9224cd2f8e91090de" exitCode=0 Dec 02 15:06:31 crc kubenswrapper[4814]: I1202 15:06:31.018079 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9blqc" event={"ID":"09d4fd83-ac02-42af-8ada-49cd9e87fd9f","Type":"ContainerDied","Data":"8e52980b5cfc4a6ce7cc59efecb9c088bae2c8b7f9d51fe9224cd2f8e91090de"} Dec 02 15:06:31 crc kubenswrapper[4814]: I1202 15:06:31.018107 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9blqc" event={"ID":"09d4fd83-ac02-42af-8ada-49cd9e87fd9f","Type":"ContainerStarted","Data":"72b39fa58e83d92f7ae9f76db880540b258db16914d0b44a7a7aa1d72205e216"} Dec 02 15:06:31 crc kubenswrapper[4814]: I1202 15:06:31.145470 4814 scope.go:117] "RemoveContainer" containerID="376a9ab6c2d9674333d887cad80c4852477321a303aeeea27a5b6bd974830465" Dec 02 15:06:31 crc kubenswrapper[4814]: E1202 15:06:31.146078 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(24c70328-5be3-476b-8788-fcd8ba8bee33)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" Dec 02 15:06:32 crc kubenswrapper[4814]: I1202 15:06:32.030085 4814 generic.go:334] "Generic (PLEG): container finished" podID="09d4fd83-ac02-42af-8ada-49cd9e87fd9f" 
containerID="1aadf9df14b6b5e5f150cebe21b40941f3a8183bf202e91a15c7029ea3cb7735" exitCode=0 Dec 02 15:06:32 crc kubenswrapper[4814]: I1202 15:06:32.030479 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9blqc" event={"ID":"09d4fd83-ac02-42af-8ada-49cd9e87fd9f","Type":"ContainerDied","Data":"1aadf9df14b6b5e5f150cebe21b40941f3a8183bf202e91a15c7029ea3cb7735"} Dec 02 15:06:33 crc kubenswrapper[4814]: I1202 15:06:33.043629 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9blqc" event={"ID":"09d4fd83-ac02-42af-8ada-49cd9e87fd9f","Type":"ContainerStarted","Data":"2e1c3c72eeba162226952afeed2fc473ebf6af9f2ce82a076d1b20a2af7ab47e"} Dec 02 15:06:33 crc kubenswrapper[4814]: I1202 15:06:33.066673 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9blqc" podStartSLOduration=2.508523683 podStartE2EDuration="4.0666545s" podCreationTimestamp="2025-12-02 15:06:29 +0000 UTC" firstStartedPulling="2025-12-02 15:06:31.019455183 +0000 UTC m=+2253.143817387" lastFinishedPulling="2025-12-02 15:06:32.57758599 +0000 UTC m=+2254.701948204" observedRunningTime="2025-12-02 15:06:33.065417859 +0000 UTC m=+2255.189780073" watchObservedRunningTime="2025-12-02 15:06:33.0666545 +0000 UTC m=+2255.191016704" Dec 02 15:06:39 crc kubenswrapper[4814]: I1202 15:06:39.862504 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9blqc" Dec 02 15:06:39 crc kubenswrapper[4814]: I1202 15:06:39.863186 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9blqc" Dec 02 15:06:39 crc kubenswrapper[4814]: I1202 15:06:39.907087 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9blqc" Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.162824 4814 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9blqc" Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.295040 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"] Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.301424 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-tb76p"] Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.374247 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcherec58-account-delete-8qtvv"] Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.375617 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherec58-account-delete-8qtvv" Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.392543 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherec58-account-delete-8qtvv"] Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.435999 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.436211 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="482b478e-e7a1-4448-8da2-f1160fdb3e1a" containerName="watcher-kuttl-api-log" containerID="cri-o://85e7db71c9a55a9f94a8031361d87942994d64fd6df0d41c35f4e78c0aed93cc" gracePeriod=30 Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.436597 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="482b478e-e7a1-4448-8da2-f1160fdb3e1a" containerName="watcher-api" containerID="cri-o://91508f1a89b7a65e06b0e237702783c6a1a8a223ea8a5c818feec93a99c34ce2" gracePeriod=30 Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.446471 4814 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl9pb\" (UniqueName: \"kubernetes.io/projected/73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81-kube-api-access-wl9pb\") pod \"watcherec58-account-delete-8qtvv\" (UID: \"73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81\") " pod="watcher-kuttl-default/watcherec58-account-delete-8qtvv" Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.446526 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81-operator-scripts\") pod \"watcherec58-account-delete-8qtvv\" (UID: \"73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81\") " pod="watcher-kuttl-default/watcherec58-account-delete-8qtvv" Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.455776 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.506303 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.506536 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="c153b697-66c1-48ec-ba60-556ca7b6c7cb" containerName="watcher-applier" containerID="cri-o://36306c28b441be17d477a0477235d3f36993bf0696e6574561c52240f8488a71" gracePeriod=30 Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.548498 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl9pb\" (UniqueName: \"kubernetes.io/projected/73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81-kube-api-access-wl9pb\") pod \"watcherec58-account-delete-8qtvv\" (UID: \"73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81\") " pod="watcher-kuttl-default/watcherec58-account-delete-8qtvv" Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 
15:06:40.548553 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81-operator-scripts\") pod \"watcherec58-account-delete-8qtvv\" (UID: \"73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81\") " pod="watcher-kuttl-default/watcherec58-account-delete-8qtvv" Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.549275 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81-operator-scripts\") pod \"watcherec58-account-delete-8qtvv\" (UID: \"73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81\") " pod="watcher-kuttl-default/watcherec58-account-delete-8qtvv" Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.574302 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl9pb\" (UniqueName: \"kubernetes.io/projected/73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81-kube-api-access-wl9pb\") pod \"watcherec58-account-delete-8qtvv\" (UID: \"73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81\") " pod="watcher-kuttl-default/watcherec58-account-delete-8qtvv" Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.709443 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherec58-account-delete-8qtvv" Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.871505 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.955810 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24c70328-5be3-476b-8788-fcd8ba8bee33-logs\") pod \"24c70328-5be3-476b-8788-fcd8ba8bee33\" (UID: \"24c70328-5be3-476b-8788-fcd8ba8bee33\") " Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.955897 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/24c70328-5be3-476b-8788-fcd8ba8bee33-custom-prometheus-ca\") pod \"24c70328-5be3-476b-8788-fcd8ba8bee33\" (UID: \"24c70328-5be3-476b-8788-fcd8ba8bee33\") " Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.955980 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24c70328-5be3-476b-8788-fcd8ba8bee33-combined-ca-bundle\") pod \"24c70328-5be3-476b-8788-fcd8ba8bee33\" (UID: \"24c70328-5be3-476b-8788-fcd8ba8bee33\") " Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.956058 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8pwz\" (UniqueName: \"kubernetes.io/projected/24c70328-5be3-476b-8788-fcd8ba8bee33-kube-api-access-q8pwz\") pod \"24c70328-5be3-476b-8788-fcd8ba8bee33\" (UID: \"24c70328-5be3-476b-8788-fcd8ba8bee33\") " Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.956096 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24c70328-5be3-476b-8788-fcd8ba8bee33-config-data\") pod \"24c70328-5be3-476b-8788-fcd8ba8bee33\" (UID: \"24c70328-5be3-476b-8788-fcd8ba8bee33\") " Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.959965 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/24c70328-5be3-476b-8788-fcd8ba8bee33-logs" (OuterVolumeSpecName: "logs") pod "24c70328-5be3-476b-8788-fcd8ba8bee33" (UID: "24c70328-5be3-476b-8788-fcd8ba8bee33"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.963179 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24c70328-5be3-476b-8788-fcd8ba8bee33-kube-api-access-q8pwz" (OuterVolumeSpecName: "kube-api-access-q8pwz") pod "24c70328-5be3-476b-8788-fcd8ba8bee33" (UID: "24c70328-5be3-476b-8788-fcd8ba8bee33"). InnerVolumeSpecName "kube-api-access-q8pwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.989484 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24c70328-5be3-476b-8788-fcd8ba8bee33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24c70328-5be3-476b-8788-fcd8ba8bee33" (UID: "24c70328-5be3-476b-8788-fcd8ba8bee33"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:06:40 crc kubenswrapper[4814]: I1202 15:06:40.998547 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24c70328-5be3-476b-8788-fcd8ba8bee33-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "24c70328-5be3-476b-8788-fcd8ba8bee33" (UID: "24c70328-5be3-476b-8788-fcd8ba8bee33"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.021376 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24c70328-5be3-476b-8788-fcd8ba8bee33-config-data" (OuterVolumeSpecName: "config-data") pod "24c70328-5be3-476b-8788-fcd8ba8bee33" (UID: "24c70328-5be3-476b-8788-fcd8ba8bee33"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.063528 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24c70328-5be3-476b-8788-fcd8ba8bee33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.063559 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8pwz\" (UniqueName: \"kubernetes.io/projected/24c70328-5be3-476b-8788-fcd8ba8bee33-kube-api-access-q8pwz\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.063571 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24c70328-5be3-476b-8788-fcd8ba8bee33-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.063579 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24c70328-5be3-476b-8788-fcd8ba8bee33-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.063588 4814 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/24c70328-5be3-476b-8788-fcd8ba8bee33-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.116962 4814 generic.go:334] "Generic (PLEG): container finished" podID="482b478e-e7a1-4448-8da2-f1160fdb3e1a" containerID="85e7db71c9a55a9f94a8031361d87942994d64fd6df0d41c35f4e78c0aed93cc" exitCode=143 Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.117036 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"482b478e-e7a1-4448-8da2-f1160fdb3e1a","Type":"ContainerDied","Data":"85e7db71c9a55a9f94a8031361d87942994d64fd6df0d41c35f4e78c0aed93cc"} 
Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.119371 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.119402 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"24c70328-5be3-476b-8788-fcd8ba8bee33","Type":"ContainerDied","Data":"89b6f4b8ca75070e9b3b59c758c01b1fb0481dc3a0ee603c7e24aedb295a0cd1"} Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.119468 4814 scope.go:117] "RemoveContainer" containerID="376a9ab6c2d9674333d887cad80c4852477321a303aeeea27a5b6bd974830465" Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.176526 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.186207 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.247917 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherec58-account-delete-8qtvv"] Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.937101 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.987859 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/482b478e-e7a1-4448-8da2-f1160fdb3e1a-logs\") pod \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.987976 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-config-data\") pod \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.987997 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-public-tls-certs\") pod \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.988024 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq24g\" (UniqueName: \"kubernetes.io/projected/482b478e-e7a1-4448-8da2-f1160fdb3e1a-kube-api-access-hq24g\") pod \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.988076 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-combined-ca-bundle\") pod \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.988103 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-custom-prometheus-ca\") pod \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.988148 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-internal-tls-certs\") pod \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\" (UID: \"482b478e-e7a1-4448-8da2-f1160fdb3e1a\") " Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.989192 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/482b478e-e7a1-4448-8da2-f1160fdb3e1a-logs" (OuterVolumeSpecName: "logs") pod "482b478e-e7a1-4448-8da2-f1160fdb3e1a" (UID: "482b478e-e7a1-4448-8da2-f1160fdb3e1a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:06:41 crc kubenswrapper[4814]: I1202 15:06:41.997012 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/482b478e-e7a1-4448-8da2-f1160fdb3e1a-kube-api-access-hq24g" (OuterVolumeSpecName: "kube-api-access-hq24g") pod "482b478e-e7a1-4448-8da2-f1160fdb3e1a" (UID: "482b478e-e7a1-4448-8da2-f1160fdb3e1a"). InnerVolumeSpecName "kube-api-access-hq24g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.021662 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "482b478e-e7a1-4448-8da2-f1160fdb3e1a" (UID: "482b478e-e7a1-4448-8da2-f1160fdb3e1a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.035927 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "482b478e-e7a1-4448-8da2-f1160fdb3e1a" (UID: "482b478e-e7a1-4448-8da2-f1160fdb3e1a"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.041395 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-config-data" (OuterVolumeSpecName: "config-data") pod "482b478e-e7a1-4448-8da2-f1160fdb3e1a" (UID: "482b478e-e7a1-4448-8da2-f1160fdb3e1a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.048499 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "482b478e-e7a1-4448-8da2-f1160fdb3e1a" (UID: "482b478e-e7a1-4448-8da2-f1160fdb3e1a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.050265 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "482b478e-e7a1-4448-8da2-f1160fdb3e1a" (UID: "482b478e-e7a1-4448-8da2-f1160fdb3e1a"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:06:42 crc kubenswrapper[4814]: E1202 15:06:42.068859 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="36306c28b441be17d477a0477235d3f36993bf0696e6574561c52240f8488a71" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 15:06:42 crc kubenswrapper[4814]: E1202 15:06:42.069880 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="36306c28b441be17d477a0477235d3f36993bf0696e6574561c52240f8488a71" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 15:06:42 crc kubenswrapper[4814]: E1202 15:06:42.070946 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="36306c28b441be17d477a0477235d3f36993bf0696e6574561c52240f8488a71" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 15:06:42 crc kubenswrapper[4814]: E1202 15:06:42.070977 4814 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="c153b697-66c1-48ec-ba60-556ca7b6c7cb" containerName="watcher-applier" Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.089588 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.089621 4814 reconciler_common.go:293] "Volume detached for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.089632 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq24g\" (UniqueName: \"kubernetes.io/projected/482b478e-e7a1-4448-8da2-f1160fdb3e1a-kube-api-access-hq24g\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.089642 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.089650 4814 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.089659 4814 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/482b478e-e7a1-4448-8da2-f1160fdb3e1a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.089667 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/482b478e-e7a1-4448-8da2-f1160fdb3e1a-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.126388 4814 generic.go:334] "Generic (PLEG): container finished" podID="482b478e-e7a1-4448-8da2-f1160fdb3e1a" containerID="91508f1a89b7a65e06b0e237702783c6a1a8a223ea8a5c818feec93a99c34ce2" exitCode=0 Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.126495 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.126484 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"482b478e-e7a1-4448-8da2-f1160fdb3e1a","Type":"ContainerDied","Data":"91508f1a89b7a65e06b0e237702783c6a1a8a223ea8a5c818feec93a99c34ce2"} Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.126637 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"482b478e-e7a1-4448-8da2-f1160fdb3e1a","Type":"ContainerDied","Data":"94b32c1b75a35bf21a0ce62b643a7f69914bf763063c13a6da0e240a2e9a73c5"} Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.126679 4814 scope.go:117] "RemoveContainer" containerID="91508f1a89b7a65e06b0e237702783c6a1a8a223ea8a5c818feec93a99c34ce2" Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.129649 4814 generic.go:334] "Generic (PLEG): container finished" podID="73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81" containerID="8883032e1c198be0705ad29f5b20e96d3607871250a36f31964bc72b16a3263a" exitCode=0 Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.129682 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherec58-account-delete-8qtvv" event={"ID":"73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81","Type":"ContainerDied","Data":"8883032e1c198be0705ad29f5b20e96d3607871250a36f31964bc72b16a3263a"} Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.129705 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherec58-account-delete-8qtvv" event={"ID":"73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81","Type":"ContainerStarted","Data":"e466d0eae5cd8425df6e7dd758a63642f3df5a1c97d8e1f71cc06383151b17ce"} Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.154000 4814 scope.go:117] "RemoveContainer" containerID="85e7db71c9a55a9f94a8031361d87942994d64fd6df0d41c35f4e78c0aed93cc" Dec 02 15:06:42 crc 
kubenswrapper[4814]: I1202 15:06:42.185671 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" path="/var/lib/kubelet/pods/24c70328-5be3-476b-8788-fcd8ba8bee33/volumes" Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.187004 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816" path="/var/lib/kubelet/pods/9ca7d9ef-9e6c-40d8-8ff9-3ea916d02816/volumes" Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.204292 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.230052 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.247384 4814 scope.go:117] "RemoveContainer" containerID="91508f1a89b7a65e06b0e237702783c6a1a8a223ea8a5c818feec93a99c34ce2" Dec 02 15:06:42 crc kubenswrapper[4814]: E1202 15:06:42.252619 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91508f1a89b7a65e06b0e237702783c6a1a8a223ea8a5c818feec93a99c34ce2\": container with ID starting with 91508f1a89b7a65e06b0e237702783c6a1a8a223ea8a5c818feec93a99c34ce2 not found: ID does not exist" containerID="91508f1a89b7a65e06b0e237702783c6a1a8a223ea8a5c818feec93a99c34ce2" Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.252670 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91508f1a89b7a65e06b0e237702783c6a1a8a223ea8a5c818feec93a99c34ce2"} err="failed to get container status \"91508f1a89b7a65e06b0e237702783c6a1a8a223ea8a5c818feec93a99c34ce2\": rpc error: code = NotFound desc = could not find container \"91508f1a89b7a65e06b0e237702783c6a1a8a223ea8a5c818feec93a99c34ce2\": container with ID starting with 
91508f1a89b7a65e06b0e237702783c6a1a8a223ea8a5c818feec93a99c34ce2 not found: ID does not exist" Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.252696 4814 scope.go:117] "RemoveContainer" containerID="85e7db71c9a55a9f94a8031361d87942994d64fd6df0d41c35f4e78c0aed93cc" Dec 02 15:06:42 crc kubenswrapper[4814]: E1202 15:06:42.258436 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85e7db71c9a55a9f94a8031361d87942994d64fd6df0d41c35f4e78c0aed93cc\": container with ID starting with 85e7db71c9a55a9f94a8031361d87942994d64fd6df0d41c35f4e78c0aed93cc not found: ID does not exist" containerID="85e7db71c9a55a9f94a8031361d87942994d64fd6df0d41c35f4e78c0aed93cc" Dec 02 15:06:42 crc kubenswrapper[4814]: I1202 15:06:42.258488 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85e7db71c9a55a9f94a8031361d87942994d64fd6df0d41c35f4e78c0aed93cc"} err="failed to get container status \"85e7db71c9a55a9f94a8031361d87942994d64fd6df0d41c35f4e78c0aed93cc\": rpc error: code = NotFound desc = could not find container \"85e7db71c9a55a9f94a8031361d87942994d64fd6df0d41c35f4e78c0aed93cc\": container with ID starting with 85e7db71c9a55a9f94a8031361d87942994d64fd6df0d41c35f4e78c0aed93cc not found: ID does not exist" Dec 02 15:06:43 crc kubenswrapper[4814]: I1202 15:06:43.504169 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9blqc"] Dec 02 15:06:43 crc kubenswrapper[4814]: I1202 15:06:43.505258 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9blqc" podUID="09d4fd83-ac02-42af-8ada-49cd9e87fd9f" containerName="registry-server" containerID="cri-o://2e1c3c72eeba162226952afeed2fc473ebf6af9f2ce82a076d1b20a2af7ab47e" gracePeriod=2 Dec 02 15:06:43 crc kubenswrapper[4814]: I1202 15:06:43.531542 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherec58-account-delete-8qtvv" Dec 02 15:06:43 crc kubenswrapper[4814]: I1202 15:06:43.623656 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl9pb\" (UniqueName: \"kubernetes.io/projected/73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81-kube-api-access-wl9pb\") pod \"73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81\" (UID: \"73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81\") " Dec 02 15:06:43 crc kubenswrapper[4814]: I1202 15:06:43.623798 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81-operator-scripts\") pod \"73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81\" (UID: \"73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81\") " Dec 02 15:06:43 crc kubenswrapper[4814]: I1202 15:06:43.624754 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81" (UID: "73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:06:43 crc kubenswrapper[4814]: I1202 15:06:43.631530 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81-kube-api-access-wl9pb" (OuterVolumeSpecName: "kube-api-access-wl9pb") pod "73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81" (UID: "73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81"). InnerVolumeSpecName "kube-api-access-wl9pb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:06:43 crc kubenswrapper[4814]: I1202 15:06:43.725903 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:43 crc kubenswrapper[4814]: I1202 15:06:43.725936 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl9pb\" (UniqueName: \"kubernetes.io/projected/73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81-kube-api-access-wl9pb\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:43 crc kubenswrapper[4814]: I1202 15:06:43.927876 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9blqc" Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.029147 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09d4fd83-ac02-42af-8ada-49cd9e87fd9f-utilities\") pod \"09d4fd83-ac02-42af-8ada-49cd9e87fd9f\" (UID: \"09d4fd83-ac02-42af-8ada-49cd9e87fd9f\") " Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.029244 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4hsz\" (UniqueName: \"kubernetes.io/projected/09d4fd83-ac02-42af-8ada-49cd9e87fd9f-kube-api-access-b4hsz\") pod \"09d4fd83-ac02-42af-8ada-49cd9e87fd9f\" (UID: \"09d4fd83-ac02-42af-8ada-49cd9e87fd9f\") " Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.029279 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09d4fd83-ac02-42af-8ada-49cd9e87fd9f-catalog-content\") pod \"09d4fd83-ac02-42af-8ada-49cd9e87fd9f\" (UID: \"09d4fd83-ac02-42af-8ada-49cd9e87fd9f\") " Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.031157 4814 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/09d4fd83-ac02-42af-8ada-49cd9e87fd9f-utilities" (OuterVolumeSpecName: "utilities") pod "09d4fd83-ac02-42af-8ada-49cd9e87fd9f" (UID: "09d4fd83-ac02-42af-8ada-49cd9e87fd9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.040133 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d4fd83-ac02-42af-8ada-49cd9e87fd9f-kube-api-access-b4hsz" (OuterVolumeSpecName: "kube-api-access-b4hsz") pod "09d4fd83-ac02-42af-8ada-49cd9e87fd9f" (UID: "09d4fd83-ac02-42af-8ada-49cd9e87fd9f"). InnerVolumeSpecName "kube-api-access-b4hsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.059442 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09d4fd83-ac02-42af-8ada-49cd9e87fd9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09d4fd83-ac02-42af-8ada-49cd9e87fd9f" (UID: "09d4fd83-ac02-42af-8ada-49cd9e87fd9f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.131851 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09d4fd83-ac02-42af-8ada-49cd9e87fd9f-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.131915 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4hsz\" (UniqueName: \"kubernetes.io/projected/09d4fd83-ac02-42af-8ada-49cd9e87fd9f-kube-api-access-b4hsz\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.131930 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09d4fd83-ac02-42af-8ada-49cd9e87fd9f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.155301 4814 generic.go:334] "Generic (PLEG): container finished" podID="09d4fd83-ac02-42af-8ada-49cd9e87fd9f" containerID="2e1c3c72eeba162226952afeed2fc473ebf6af9f2ce82a076d1b20a2af7ab47e" exitCode=0 Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.155433 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9blqc" Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.155928 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="482b478e-e7a1-4448-8da2-f1160fdb3e1a" path="/var/lib/kubelet/pods/482b478e-e7a1-4448-8da2-f1160fdb3e1a/volumes" Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.157042 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherec58-account-delete-8qtvv" Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.157959 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9blqc" event={"ID":"09d4fd83-ac02-42af-8ada-49cd9e87fd9f","Type":"ContainerDied","Data":"2e1c3c72eeba162226952afeed2fc473ebf6af9f2ce82a076d1b20a2af7ab47e"} Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.158013 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9blqc" event={"ID":"09d4fd83-ac02-42af-8ada-49cd9e87fd9f","Type":"ContainerDied","Data":"72b39fa58e83d92f7ae9f76db880540b258db16914d0b44a7a7aa1d72205e216"} Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.158038 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherec58-account-delete-8qtvv" event={"ID":"73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81","Type":"ContainerDied","Data":"e466d0eae5cd8425df6e7dd758a63642f3df5a1c97d8e1f71cc06383151b17ce"} Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.158059 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e466d0eae5cd8425df6e7dd758a63642f3df5a1c97d8e1f71cc06383151b17ce" Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.158086 4814 scope.go:117] "RemoveContainer" containerID="2e1c3c72eeba162226952afeed2fc473ebf6af9f2ce82a076d1b20a2af7ab47e" Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.185721 4814 scope.go:117] "RemoveContainer" containerID="1aadf9df14b6b5e5f150cebe21b40941f3a8183bf202e91a15c7029ea3cb7735" Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.201522 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9blqc"] Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.210520 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9blqc"] Dec 02 15:06:44 crc 
kubenswrapper[4814]: I1202 15:06:44.216721 4814 scope.go:117] "RemoveContainer" containerID="8e52980b5cfc4a6ce7cc59efecb9c088bae2c8b7f9d51fe9224cd2f8e91090de" Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.241513 4814 scope.go:117] "RemoveContainer" containerID="2e1c3c72eeba162226952afeed2fc473ebf6af9f2ce82a076d1b20a2af7ab47e" Dec 02 15:06:44 crc kubenswrapper[4814]: E1202 15:06:44.241899 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e1c3c72eeba162226952afeed2fc473ebf6af9f2ce82a076d1b20a2af7ab47e\": container with ID starting with 2e1c3c72eeba162226952afeed2fc473ebf6af9f2ce82a076d1b20a2af7ab47e not found: ID does not exist" containerID="2e1c3c72eeba162226952afeed2fc473ebf6af9f2ce82a076d1b20a2af7ab47e" Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.241962 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e1c3c72eeba162226952afeed2fc473ebf6af9f2ce82a076d1b20a2af7ab47e"} err="failed to get container status \"2e1c3c72eeba162226952afeed2fc473ebf6af9f2ce82a076d1b20a2af7ab47e\": rpc error: code = NotFound desc = could not find container \"2e1c3c72eeba162226952afeed2fc473ebf6af9f2ce82a076d1b20a2af7ab47e\": container with ID starting with 2e1c3c72eeba162226952afeed2fc473ebf6af9f2ce82a076d1b20a2af7ab47e not found: ID does not exist" Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.241999 4814 scope.go:117] "RemoveContainer" containerID="1aadf9df14b6b5e5f150cebe21b40941f3a8183bf202e91a15c7029ea3cb7735" Dec 02 15:06:44 crc kubenswrapper[4814]: E1202 15:06:44.242435 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aadf9df14b6b5e5f150cebe21b40941f3a8183bf202e91a15c7029ea3cb7735\": container with ID starting with 1aadf9df14b6b5e5f150cebe21b40941f3a8183bf202e91a15c7029ea3cb7735 not found: ID does not exist" 
containerID="1aadf9df14b6b5e5f150cebe21b40941f3a8183bf202e91a15c7029ea3cb7735" Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.242464 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aadf9df14b6b5e5f150cebe21b40941f3a8183bf202e91a15c7029ea3cb7735"} err="failed to get container status \"1aadf9df14b6b5e5f150cebe21b40941f3a8183bf202e91a15c7029ea3cb7735\": rpc error: code = NotFound desc = could not find container \"1aadf9df14b6b5e5f150cebe21b40941f3a8183bf202e91a15c7029ea3cb7735\": container with ID starting with 1aadf9df14b6b5e5f150cebe21b40941f3a8183bf202e91a15c7029ea3cb7735 not found: ID does not exist" Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.242482 4814 scope.go:117] "RemoveContainer" containerID="8e52980b5cfc4a6ce7cc59efecb9c088bae2c8b7f9d51fe9224cd2f8e91090de" Dec 02 15:06:44 crc kubenswrapper[4814]: E1202 15:06:44.242912 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e52980b5cfc4a6ce7cc59efecb9c088bae2c8b7f9d51fe9224cd2f8e91090de\": container with ID starting with 8e52980b5cfc4a6ce7cc59efecb9c088bae2c8b7f9d51fe9224cd2f8e91090de not found: ID does not exist" containerID="8e52980b5cfc4a6ce7cc59efecb9c088bae2c8b7f9d51fe9224cd2f8e91090de" Dec 02 15:06:44 crc kubenswrapper[4814]: I1202 15:06:44.242931 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e52980b5cfc4a6ce7cc59efecb9c088bae2c8b7f9d51fe9224cd2f8e91090de"} err="failed to get container status \"8e52980b5cfc4a6ce7cc59efecb9c088bae2c8b7f9d51fe9224cd2f8e91090de\": rpc error: code = NotFound desc = could not find container \"8e52980b5cfc4a6ce7cc59efecb9c088bae2c8b7f9d51fe9224cd2f8e91090de\": container with ID starting with 8e52980b5cfc4a6ce7cc59efecb9c088bae2c8b7f9d51fe9224cd2f8e91090de not found: ID does not exist" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.401769 4814 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-zppnr"] Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.410869 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-zppnr"] Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.418436 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcherec58-account-delete-8qtvv"] Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.426733 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-ec58-account-create-update-kwgds"] Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.432559 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcherec58-account-delete-8qtvv"] Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.438059 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-ec58-account-create-update-kwgds"] Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.611725 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-1187-account-create-update-dhb89"] Dec 02 15:06:45 crc kubenswrapper[4814]: E1202 15:06:45.612297 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerName="watcher-decision-engine" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.612312 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerName="watcher-decision-engine" Dec 02 15:06:45 crc kubenswrapper[4814]: E1202 15:06:45.612321 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d4fd83-ac02-42af-8ada-49cd9e87fd9f" containerName="registry-server" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.612327 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d4fd83-ac02-42af-8ada-49cd9e87fd9f" containerName="registry-server" Dec 02 
15:06:45 crc kubenswrapper[4814]: E1202 15:06:45.612334 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerName="watcher-decision-engine" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.612342 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerName="watcher-decision-engine" Dec 02 15:06:45 crc kubenswrapper[4814]: E1202 15:06:45.612352 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d4fd83-ac02-42af-8ada-49cd9e87fd9f" containerName="extract-content" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.612357 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d4fd83-ac02-42af-8ada-49cd9e87fd9f" containerName="extract-content" Dec 02 15:06:45 crc kubenswrapper[4814]: E1202 15:06:45.612371 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="482b478e-e7a1-4448-8da2-f1160fdb3e1a" containerName="watcher-api" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.612377 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="482b478e-e7a1-4448-8da2-f1160fdb3e1a" containerName="watcher-api" Dec 02 15:06:45 crc kubenswrapper[4814]: E1202 15:06:45.612388 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81" containerName="mariadb-account-delete" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.612393 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81" containerName="mariadb-account-delete" Dec 02 15:06:45 crc kubenswrapper[4814]: E1202 15:06:45.612406 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerName="watcher-decision-engine" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.612412 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" 
containerName="watcher-decision-engine" Dec 02 15:06:45 crc kubenswrapper[4814]: E1202 15:06:45.612418 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="482b478e-e7a1-4448-8da2-f1160fdb3e1a" containerName="watcher-kuttl-api-log" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.612425 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="482b478e-e7a1-4448-8da2-f1160fdb3e1a" containerName="watcher-kuttl-api-log" Dec 02 15:06:45 crc kubenswrapper[4814]: E1202 15:06:45.612444 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d4fd83-ac02-42af-8ada-49cd9e87fd9f" containerName="extract-utilities" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.612450 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d4fd83-ac02-42af-8ada-49cd9e87fd9f" containerName="extract-utilities" Dec 02 15:06:45 crc kubenswrapper[4814]: E1202 15:06:45.612461 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerName="watcher-decision-engine" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.612466 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerName="watcher-decision-engine" Dec 02 15:06:45 crc kubenswrapper[4814]: E1202 15:06:45.612473 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerName="watcher-decision-engine" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.612479 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerName="watcher-decision-engine" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.612648 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="482b478e-e7a1-4448-8da2-f1160fdb3e1a" containerName="watcher-kuttl-api-log" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.612663 4814 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="482b478e-e7a1-4448-8da2-f1160fdb3e1a" containerName="watcher-api" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.612683 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81" containerName="mariadb-account-delete" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.612702 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerName="watcher-decision-engine" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.612710 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="09d4fd83-ac02-42af-8ada-49cd9e87fd9f" containerName="registry-server" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.612718 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerName="watcher-decision-engine" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.612726 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerName="watcher-decision-engine" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.612737 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerName="watcher-decision-engine" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.613257 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-1187-account-create-update-dhb89" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.615847 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.625815 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-vjj4t"] Dec 02 15:06:45 crc kubenswrapper[4814]: E1202 15:06:45.626333 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerName="watcher-decision-engine" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.626351 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerName="watcher-decision-engine" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.626522 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerName="watcher-decision-engine" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.627125 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-vjj4t" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.647321 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-1187-account-create-update-dhb89"] Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.656659 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a2beaca-a78b-4a47-a670-87001e8cb226-operator-scripts\") pod \"watcher-1187-account-create-update-dhb89\" (UID: \"2a2beaca-a78b-4a47-a670-87001e8cb226\") " pod="watcher-kuttl-default/watcher-1187-account-create-update-dhb89" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.656753 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc2mz\" (UniqueName: \"kubernetes.io/projected/2a2beaca-a78b-4a47-a670-87001e8cb226-kube-api-access-bc2mz\") pod \"watcher-1187-account-create-update-dhb89\" (UID: \"2a2beaca-a78b-4a47-a670-87001e8cb226\") " pod="watcher-kuttl-default/watcher-1187-account-create-update-dhb89" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.656804 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2099001a-40b2-4a49-a97d-04319d64a597-operator-scripts\") pod \"watcher-db-create-vjj4t\" (UID: \"2099001a-40b2-4a49-a97d-04319d64a597\") " pod="watcher-kuttl-default/watcher-db-create-vjj4t" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.656826 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqjsm\" (UniqueName: \"kubernetes.io/projected/2099001a-40b2-4a49-a97d-04319d64a597-kube-api-access-xqjsm\") pod \"watcher-db-create-vjj4t\" (UID: \"2099001a-40b2-4a49-a97d-04319d64a597\") " 
pod="watcher-kuttl-default/watcher-db-create-vjj4t" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.665471 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-vjj4t"] Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.757997 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc2mz\" (UniqueName: \"kubernetes.io/projected/2a2beaca-a78b-4a47-a670-87001e8cb226-kube-api-access-bc2mz\") pod \"watcher-1187-account-create-update-dhb89\" (UID: \"2a2beaca-a78b-4a47-a670-87001e8cb226\") " pod="watcher-kuttl-default/watcher-1187-account-create-update-dhb89" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.758081 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2099001a-40b2-4a49-a97d-04319d64a597-operator-scripts\") pod \"watcher-db-create-vjj4t\" (UID: \"2099001a-40b2-4a49-a97d-04319d64a597\") " pod="watcher-kuttl-default/watcher-db-create-vjj4t" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.758104 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqjsm\" (UniqueName: \"kubernetes.io/projected/2099001a-40b2-4a49-a97d-04319d64a597-kube-api-access-xqjsm\") pod \"watcher-db-create-vjj4t\" (UID: \"2099001a-40b2-4a49-a97d-04319d64a597\") " pod="watcher-kuttl-default/watcher-db-create-vjj4t" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.758161 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a2beaca-a78b-4a47-a670-87001e8cb226-operator-scripts\") pod \"watcher-1187-account-create-update-dhb89\" (UID: \"2a2beaca-a78b-4a47-a670-87001e8cb226\") " pod="watcher-kuttl-default/watcher-1187-account-create-update-dhb89" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.759471 4814 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2099001a-40b2-4a49-a97d-04319d64a597-operator-scripts\") pod \"watcher-db-create-vjj4t\" (UID: \"2099001a-40b2-4a49-a97d-04319d64a597\") " pod="watcher-kuttl-default/watcher-db-create-vjj4t" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.759551 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a2beaca-a78b-4a47-a670-87001e8cb226-operator-scripts\") pod \"watcher-1187-account-create-update-dhb89\" (UID: \"2a2beaca-a78b-4a47-a670-87001e8cb226\") " pod="watcher-kuttl-default/watcher-1187-account-create-update-dhb89" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.777728 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc2mz\" (UniqueName: \"kubernetes.io/projected/2a2beaca-a78b-4a47-a670-87001e8cb226-kube-api-access-bc2mz\") pod \"watcher-1187-account-create-update-dhb89\" (UID: \"2a2beaca-a78b-4a47-a670-87001e8cb226\") " pod="watcher-kuttl-default/watcher-1187-account-create-update-dhb89" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.778503 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqjsm\" (UniqueName: \"kubernetes.io/projected/2099001a-40b2-4a49-a97d-04319d64a597-kube-api-access-xqjsm\") pod \"watcher-db-create-vjj4t\" (UID: \"2099001a-40b2-4a49-a97d-04319d64a597\") " pod="watcher-kuttl-default/watcher-db-create-vjj4t" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.820845 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.859813 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx87q\" (UniqueName: \"kubernetes.io/projected/c153b697-66c1-48ec-ba60-556ca7b6c7cb-kube-api-access-xx87q\") pod \"c153b697-66c1-48ec-ba60-556ca7b6c7cb\" (UID: \"c153b697-66c1-48ec-ba60-556ca7b6c7cb\") " Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.859982 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c153b697-66c1-48ec-ba60-556ca7b6c7cb-logs\") pod \"c153b697-66c1-48ec-ba60-556ca7b6c7cb\" (UID: \"c153b697-66c1-48ec-ba60-556ca7b6c7cb\") " Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.860027 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c153b697-66c1-48ec-ba60-556ca7b6c7cb-config-data\") pod \"c153b697-66c1-48ec-ba60-556ca7b6c7cb\" (UID: \"c153b697-66c1-48ec-ba60-556ca7b6c7cb\") " Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.860058 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c153b697-66c1-48ec-ba60-556ca7b6c7cb-combined-ca-bundle\") pod \"c153b697-66c1-48ec-ba60-556ca7b6c7cb\" (UID: \"c153b697-66c1-48ec-ba60-556ca7b6c7cb\") " Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.860938 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c153b697-66c1-48ec-ba60-556ca7b6c7cb-logs" (OuterVolumeSpecName: "logs") pod "c153b697-66c1-48ec-ba60-556ca7b6c7cb" (UID: "c153b697-66c1-48ec-ba60-556ca7b6c7cb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.864426 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c153b697-66c1-48ec-ba60-556ca7b6c7cb-kube-api-access-xx87q" (OuterVolumeSpecName: "kube-api-access-xx87q") pod "c153b697-66c1-48ec-ba60-556ca7b6c7cb" (UID: "c153b697-66c1-48ec-ba60-556ca7b6c7cb"). InnerVolumeSpecName "kube-api-access-xx87q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.880136 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c153b697-66c1-48ec-ba60-556ca7b6c7cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c153b697-66c1-48ec-ba60-556ca7b6c7cb" (UID: "c153b697-66c1-48ec-ba60-556ca7b6c7cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.942557 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-1187-account-create-update-dhb89" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.963507 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c153b697-66c1-48ec-ba60-556ca7b6c7cb-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.963550 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c153b697-66c1-48ec-ba60-556ca7b6c7cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.963564 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx87q\" (UniqueName: \"kubernetes.io/projected/c153b697-66c1-48ec-ba60-556ca7b6c7cb-kube-api-access-xx87q\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.967093 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-vjj4t" Dec 02 15:06:45 crc kubenswrapper[4814]: I1202 15:06:45.970255 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c153b697-66c1-48ec-ba60-556ca7b6c7cb-config-data" (OuterVolumeSpecName: "config-data") pod "c153b697-66c1-48ec-ba60-556ca7b6c7cb" (UID: "c153b697-66c1-48ec-ba60-556ca7b6c7cb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:06:46 crc kubenswrapper[4814]: I1202 15:06:46.065314 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c153b697-66c1-48ec-ba60-556ca7b6c7cb-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:46 crc kubenswrapper[4814]: I1202 15:06:46.156683 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09d4fd83-ac02-42af-8ada-49cd9e87fd9f" path="/var/lib/kubelet/pods/09d4fd83-ac02-42af-8ada-49cd9e87fd9f/volumes" Dec 02 15:06:46 crc kubenswrapper[4814]: I1202 15:06:46.157609 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81" path="/var/lib/kubelet/pods/73a6dcf7-f87f-4b4e-ae46-151bcc0bcc81/volumes" Dec 02 15:06:46 crc kubenswrapper[4814]: I1202 15:06:46.158062 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b81079bf-94b8-4866-b55d-9fe3a65fcee3" path="/var/lib/kubelet/pods/b81079bf-94b8-4866-b55d-9fe3a65fcee3/volumes" Dec 02 15:06:46 crc kubenswrapper[4814]: I1202 15:06:46.159249 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eae13af5-1d8e-4289-85d8-8032e9e89bb5" path="/var/lib/kubelet/pods/eae13af5-1d8e-4289-85d8-8032e9e89bb5/volumes" Dec 02 15:06:46 crc kubenswrapper[4814]: I1202 15:06:46.179888 4814 generic.go:334] "Generic (PLEG): container finished" podID="c153b697-66c1-48ec-ba60-556ca7b6c7cb" containerID="36306c28b441be17d477a0477235d3f36993bf0696e6574561c52240f8488a71" exitCode=0 Dec 02 15:06:46 crc kubenswrapper[4814]: I1202 15:06:46.179930 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"c153b697-66c1-48ec-ba60-556ca7b6c7cb","Type":"ContainerDied","Data":"36306c28b441be17d477a0477235d3f36993bf0696e6574561c52240f8488a71"} Dec 02 15:06:46 crc kubenswrapper[4814]: I1202 15:06:46.179955 4814 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"c153b697-66c1-48ec-ba60-556ca7b6c7cb","Type":"ContainerDied","Data":"461af1e184a4999997752b13a17f1cd4f5055724de633033fe6aa441e7270fcf"} Dec 02 15:06:46 crc kubenswrapper[4814]: I1202 15:06:46.179972 4814 scope.go:117] "RemoveContainer" containerID="36306c28b441be17d477a0477235d3f36993bf0696e6574561c52240f8488a71" Dec 02 15:06:46 crc kubenswrapper[4814]: I1202 15:06:46.180072 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:06:46 crc kubenswrapper[4814]: I1202 15:06:46.217611 4814 scope.go:117] "RemoveContainer" containerID="36306c28b441be17d477a0477235d3f36993bf0696e6574561c52240f8488a71" Dec 02 15:06:46 crc kubenswrapper[4814]: E1202 15:06:46.219154 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36306c28b441be17d477a0477235d3f36993bf0696e6574561c52240f8488a71\": container with ID starting with 36306c28b441be17d477a0477235d3f36993bf0696e6574561c52240f8488a71 not found: ID does not exist" containerID="36306c28b441be17d477a0477235d3f36993bf0696e6574561c52240f8488a71" Dec 02 15:06:46 crc kubenswrapper[4814]: I1202 15:06:46.219195 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36306c28b441be17d477a0477235d3f36993bf0696e6574561c52240f8488a71"} err="failed to get container status \"36306c28b441be17d477a0477235d3f36993bf0696e6574561c52240f8488a71\": rpc error: code = NotFound desc = could not find container \"36306c28b441be17d477a0477235d3f36993bf0696e6574561c52240f8488a71\": container with ID starting with 36306c28b441be17d477a0477235d3f36993bf0696e6574561c52240f8488a71 not found: ID does not exist" Dec 02 15:06:46 crc kubenswrapper[4814]: I1202 15:06:46.226806 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 
15:06:46 crc kubenswrapper[4814]: I1202 15:06:46.236844 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:06:46 crc kubenswrapper[4814]: I1202 15:06:46.482731 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-1187-account-create-update-dhb89"] Dec 02 15:06:46 crc kubenswrapper[4814]: I1202 15:06:46.556044 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-vjj4t"] Dec 02 15:06:46 crc kubenswrapper[4814]: W1202 15:06:46.565033 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2099001a_40b2_4a49_a97d_04319d64a597.slice/crio-c627586d1b8bd4d86085a59c0da37f37bd6c6296eb0756d0824ca23c1d138b57 WatchSource:0}: Error finding container c627586d1b8bd4d86085a59c0da37f37bd6c6296eb0756d0824ca23c1d138b57: Status 404 returned error can't find the container with id c627586d1b8bd4d86085a59c0da37f37bd6c6296eb0756d0824ca23c1d138b57 Dec 02 15:06:47 crc kubenswrapper[4814]: I1202 15:06:47.189807 4814 generic.go:334] "Generic (PLEG): container finished" podID="2a2beaca-a78b-4a47-a670-87001e8cb226" containerID="179d76f0318f6e9b7d171d695cb1f7201367b3c945e142da28a699300d9bed5f" exitCode=0 Dec 02 15:06:47 crc kubenswrapper[4814]: I1202 15:06:47.189860 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-1187-account-create-update-dhb89" event={"ID":"2a2beaca-a78b-4a47-a670-87001e8cb226","Type":"ContainerDied","Data":"179d76f0318f6e9b7d171d695cb1f7201367b3c945e142da28a699300d9bed5f"} Dec 02 15:06:47 crc kubenswrapper[4814]: I1202 15:06:47.190075 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-1187-account-create-update-dhb89" event={"ID":"2a2beaca-a78b-4a47-a670-87001e8cb226","Type":"ContainerStarted","Data":"b58596a37dcd29fc4ddc40574f4acbbee909350d158207cebeb3cc06c3a13526"} Dec 
02 15:06:47 crc kubenswrapper[4814]: I1202 15:06:47.192058 4814 generic.go:334] "Generic (PLEG): container finished" podID="2099001a-40b2-4a49-a97d-04319d64a597" containerID="95eaf2f95f217a0235113536f00d791068bb54f296fb544d831a6ca3963e9465" exitCode=0 Dec 02 15:06:47 crc kubenswrapper[4814]: I1202 15:06:47.192082 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-vjj4t" event={"ID":"2099001a-40b2-4a49-a97d-04319d64a597","Type":"ContainerDied","Data":"95eaf2f95f217a0235113536f00d791068bb54f296fb544d831a6ca3963e9465"} Dec 02 15:06:47 crc kubenswrapper[4814]: I1202 15:06:47.192108 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-vjj4t" event={"ID":"2099001a-40b2-4a49-a97d-04319d64a597","Type":"ContainerStarted","Data":"c627586d1b8bd4d86085a59c0da37f37bd6c6296eb0756d0824ca23c1d138b57"} Dec 02 15:06:48 crc kubenswrapper[4814]: I1202 15:06:48.155826 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c153b697-66c1-48ec-ba60-556ca7b6c7cb" path="/var/lib/kubelet/pods/c153b697-66c1-48ec-ba60-556ca7b6c7cb/volumes" Dec 02 15:06:48 crc kubenswrapper[4814]: I1202 15:06:48.690596 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-vjj4t" Dec 02 15:06:48 crc kubenswrapper[4814]: I1202 15:06:48.707991 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-1187-account-create-update-dhb89" Dec 02 15:06:48 crc kubenswrapper[4814]: I1202 15:06:48.822484 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc2mz\" (UniqueName: \"kubernetes.io/projected/2a2beaca-a78b-4a47-a670-87001e8cb226-kube-api-access-bc2mz\") pod \"2a2beaca-a78b-4a47-a670-87001e8cb226\" (UID: \"2a2beaca-a78b-4a47-a670-87001e8cb226\") " Dec 02 15:06:48 crc kubenswrapper[4814]: I1202 15:06:48.822648 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqjsm\" (UniqueName: \"kubernetes.io/projected/2099001a-40b2-4a49-a97d-04319d64a597-kube-api-access-xqjsm\") pod \"2099001a-40b2-4a49-a97d-04319d64a597\" (UID: \"2099001a-40b2-4a49-a97d-04319d64a597\") " Dec 02 15:06:48 crc kubenswrapper[4814]: I1202 15:06:48.822733 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2099001a-40b2-4a49-a97d-04319d64a597-operator-scripts\") pod \"2099001a-40b2-4a49-a97d-04319d64a597\" (UID: \"2099001a-40b2-4a49-a97d-04319d64a597\") " Dec 02 15:06:48 crc kubenswrapper[4814]: I1202 15:06:48.822763 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a2beaca-a78b-4a47-a670-87001e8cb226-operator-scripts\") pod \"2a2beaca-a78b-4a47-a670-87001e8cb226\" (UID: \"2a2beaca-a78b-4a47-a670-87001e8cb226\") " Dec 02 15:06:48 crc kubenswrapper[4814]: I1202 15:06:48.823570 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a2beaca-a78b-4a47-a670-87001e8cb226-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a2beaca-a78b-4a47-a670-87001e8cb226" (UID: "2a2beaca-a78b-4a47-a670-87001e8cb226"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:06:48 crc kubenswrapper[4814]: I1202 15:06:48.823990 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2099001a-40b2-4a49-a97d-04319d64a597-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2099001a-40b2-4a49-a97d-04319d64a597" (UID: "2099001a-40b2-4a49-a97d-04319d64a597"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:06:48 crc kubenswrapper[4814]: I1202 15:06:48.828661 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2099001a-40b2-4a49-a97d-04319d64a597-kube-api-access-xqjsm" (OuterVolumeSpecName: "kube-api-access-xqjsm") pod "2099001a-40b2-4a49-a97d-04319d64a597" (UID: "2099001a-40b2-4a49-a97d-04319d64a597"). InnerVolumeSpecName "kube-api-access-xqjsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:06:48 crc kubenswrapper[4814]: I1202 15:06:48.830641 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a2beaca-a78b-4a47-a670-87001e8cb226-kube-api-access-bc2mz" (OuterVolumeSpecName: "kube-api-access-bc2mz") pod "2a2beaca-a78b-4a47-a670-87001e8cb226" (UID: "2a2beaca-a78b-4a47-a670-87001e8cb226"). InnerVolumeSpecName "kube-api-access-bc2mz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:06:48 crc kubenswrapper[4814]: I1202 15:06:48.924881 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqjsm\" (UniqueName: \"kubernetes.io/projected/2099001a-40b2-4a49-a97d-04319d64a597-kube-api-access-xqjsm\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:48 crc kubenswrapper[4814]: I1202 15:06:48.924912 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2099001a-40b2-4a49-a97d-04319d64a597-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:48 crc kubenswrapper[4814]: I1202 15:06:48.924920 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a2beaca-a78b-4a47-a670-87001e8cb226-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:48 crc kubenswrapper[4814]: I1202 15:06:48.924932 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc2mz\" (UniqueName: \"kubernetes.io/projected/2a2beaca-a78b-4a47-a670-87001e8cb226-kube-api-access-bc2mz\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:49 crc kubenswrapper[4814]: I1202 15:06:49.223332 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-1187-account-create-update-dhb89" Dec 02 15:06:49 crc kubenswrapper[4814]: I1202 15:06:49.223350 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-1187-account-create-update-dhb89" event={"ID":"2a2beaca-a78b-4a47-a670-87001e8cb226","Type":"ContainerDied","Data":"b58596a37dcd29fc4ddc40574f4acbbee909350d158207cebeb3cc06c3a13526"} Dec 02 15:06:49 crc kubenswrapper[4814]: I1202 15:06:49.224410 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b58596a37dcd29fc4ddc40574f4acbbee909350d158207cebeb3cc06c3a13526" Dec 02 15:06:49 crc kubenswrapper[4814]: I1202 15:06:49.236247 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-vjj4t" event={"ID":"2099001a-40b2-4a49-a97d-04319d64a597","Type":"ContainerDied","Data":"c627586d1b8bd4d86085a59c0da37f37bd6c6296eb0756d0824ca23c1d138b57"} Dec 02 15:06:49 crc kubenswrapper[4814]: I1202 15:06:49.236280 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c627586d1b8bd4d86085a59c0da37f37bd6c6296eb0756d0824ca23c1d138b57" Dec 02 15:06:49 crc kubenswrapper[4814]: I1202 15:06:49.236332 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-vjj4t" Dec 02 15:06:50 crc kubenswrapper[4814]: I1202 15:06:50.651508 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:06:50 crc kubenswrapper[4814]: I1202 15:06:50.651573 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:06:50 crc kubenswrapper[4814]: I1202 15:06:50.951131 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-phq62"] Dec 02 15:06:50 crc kubenswrapper[4814]: E1202 15:06:50.951772 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c153b697-66c1-48ec-ba60-556ca7b6c7cb" containerName="watcher-applier" Dec 02 15:06:50 crc kubenswrapper[4814]: I1202 15:06:50.951787 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="c153b697-66c1-48ec-ba60-556ca7b6c7cb" containerName="watcher-applier" Dec 02 15:06:50 crc kubenswrapper[4814]: E1202 15:06:50.951809 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a2beaca-a78b-4a47-a670-87001e8cb226" containerName="mariadb-account-create-update" Dec 02 15:06:50 crc kubenswrapper[4814]: I1202 15:06:50.951816 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a2beaca-a78b-4a47-a670-87001e8cb226" containerName="mariadb-account-create-update" Dec 02 15:06:50 crc kubenswrapper[4814]: E1202 15:06:50.951840 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2099001a-40b2-4a49-a97d-04319d64a597" 
containerName="mariadb-database-create" Dec 02 15:06:50 crc kubenswrapper[4814]: I1202 15:06:50.951848 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="2099001a-40b2-4a49-a97d-04319d64a597" containerName="mariadb-database-create" Dec 02 15:06:50 crc kubenswrapper[4814]: I1202 15:06:50.951998 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="2099001a-40b2-4a49-a97d-04319d64a597" containerName="mariadb-database-create" Dec 02 15:06:50 crc kubenswrapper[4814]: I1202 15:06:50.952014 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="c153b697-66c1-48ec-ba60-556ca7b6c7cb" containerName="watcher-applier" Dec 02 15:06:50 crc kubenswrapper[4814]: I1202 15:06:50.952039 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="24c70328-5be3-476b-8788-fcd8ba8bee33" containerName="watcher-decision-engine" Dec 02 15:06:50 crc kubenswrapper[4814]: I1202 15:06:50.952051 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a2beaca-a78b-4a47-a670-87001e8cb226" containerName="mariadb-account-create-update" Dec 02 15:06:50 crc kubenswrapper[4814]: I1202 15:06:50.952706 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" Dec 02 15:06:50 crc kubenswrapper[4814]: I1202 15:06:50.954322 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d093e349-97d2-49a3-8767-3c6a694ecbc9-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-phq62\" (UID: \"d093e349-97d2-49a3-8767-3c6a694ecbc9\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" Dec 02 15:06:50 crc kubenswrapper[4814]: I1202 15:06:50.954424 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d093e349-97d2-49a3-8767-3c6a694ecbc9-config-data\") pod \"watcher-kuttl-db-sync-phq62\" (UID: \"d093e349-97d2-49a3-8767-3c6a694ecbc9\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" Dec 02 15:06:50 crc kubenswrapper[4814]: I1202 15:06:50.954490 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d093e349-97d2-49a3-8767-3c6a694ecbc9-db-sync-config-data\") pod \"watcher-kuttl-db-sync-phq62\" (UID: \"d093e349-97d2-49a3-8767-3c6a694ecbc9\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" Dec 02 15:06:50 crc kubenswrapper[4814]: I1202 15:06:50.954628 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-822wj\" (UniqueName: \"kubernetes.io/projected/d093e349-97d2-49a3-8767-3c6a694ecbc9-kube-api-access-822wj\") pod \"watcher-kuttl-db-sync-phq62\" (UID: \"d093e349-97d2-49a3-8767-3c6a694ecbc9\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" Dec 02 15:06:50 crc kubenswrapper[4814]: I1202 15:06:50.957480 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Dec 02 15:06:50 crc kubenswrapper[4814]: I1202 
15:06:50.958301 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-sm6v5" Dec 02 15:06:50 crc kubenswrapper[4814]: I1202 15:06:50.961355 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-phq62"] Dec 02 15:06:51 crc kubenswrapper[4814]: I1202 15:06:51.055667 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-822wj\" (UniqueName: \"kubernetes.io/projected/d093e349-97d2-49a3-8767-3c6a694ecbc9-kube-api-access-822wj\") pod \"watcher-kuttl-db-sync-phq62\" (UID: \"d093e349-97d2-49a3-8767-3c6a694ecbc9\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" Dec 02 15:06:51 crc kubenswrapper[4814]: I1202 15:06:51.055727 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d093e349-97d2-49a3-8767-3c6a694ecbc9-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-phq62\" (UID: \"d093e349-97d2-49a3-8767-3c6a694ecbc9\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" Dec 02 15:06:51 crc kubenswrapper[4814]: I1202 15:06:51.055754 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d093e349-97d2-49a3-8767-3c6a694ecbc9-config-data\") pod \"watcher-kuttl-db-sync-phq62\" (UID: \"d093e349-97d2-49a3-8767-3c6a694ecbc9\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" Dec 02 15:06:51 crc kubenswrapper[4814]: I1202 15:06:51.055783 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d093e349-97d2-49a3-8767-3c6a694ecbc9-db-sync-config-data\") pod \"watcher-kuttl-db-sync-phq62\" (UID: \"d093e349-97d2-49a3-8767-3c6a694ecbc9\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" Dec 02 15:06:51 crc kubenswrapper[4814]: I1202 15:06:51.060517 
4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d093e349-97d2-49a3-8767-3c6a694ecbc9-db-sync-config-data\") pod \"watcher-kuttl-db-sync-phq62\" (UID: \"d093e349-97d2-49a3-8767-3c6a694ecbc9\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" Dec 02 15:06:51 crc kubenswrapper[4814]: I1202 15:06:51.060780 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d093e349-97d2-49a3-8767-3c6a694ecbc9-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-phq62\" (UID: \"d093e349-97d2-49a3-8767-3c6a694ecbc9\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" Dec 02 15:06:51 crc kubenswrapper[4814]: I1202 15:06:51.062321 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d093e349-97d2-49a3-8767-3c6a694ecbc9-config-data\") pod \"watcher-kuttl-db-sync-phq62\" (UID: \"d093e349-97d2-49a3-8767-3c6a694ecbc9\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" Dec 02 15:06:51 crc kubenswrapper[4814]: I1202 15:06:51.081423 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-822wj\" (UniqueName: \"kubernetes.io/projected/d093e349-97d2-49a3-8767-3c6a694ecbc9-kube-api-access-822wj\") pod \"watcher-kuttl-db-sync-phq62\" (UID: \"d093e349-97d2-49a3-8767-3c6a694ecbc9\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" Dec 02 15:06:51 crc kubenswrapper[4814]: I1202 15:06:51.280034 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" Dec 02 15:06:51 crc kubenswrapper[4814]: I1202 15:06:51.799279 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-phq62"] Dec 02 15:06:52 crc kubenswrapper[4814]: I1202 15:06:52.261520 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" event={"ID":"d093e349-97d2-49a3-8767-3c6a694ecbc9","Type":"ContainerStarted","Data":"0f2f54ca6ae8ca58f2a482a882189d067d26a9956e089782e9656aa1ae3d4e6e"} Dec 02 15:06:52 crc kubenswrapper[4814]: I1202 15:06:52.261889 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" event={"ID":"d093e349-97d2-49a3-8767-3c6a694ecbc9","Type":"ContainerStarted","Data":"e23afb3c71a066ed8f808a42fe69b78240d4090f420588b60ecd62645a304291"} Dec 02 15:06:52 crc kubenswrapper[4814]: I1202 15:06:52.284738 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" podStartSLOduration=2.284720131 podStartE2EDuration="2.284720131s" podCreationTimestamp="2025-12-02 15:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:06:52.28467632 +0000 UTC m=+2274.409038524" watchObservedRunningTime="2025-12-02 15:06:52.284720131 +0000 UTC m=+2274.409082335" Dec 02 15:06:54 crc kubenswrapper[4814]: I1202 15:06:54.280736 4814 generic.go:334] "Generic (PLEG): container finished" podID="d093e349-97d2-49a3-8767-3c6a694ecbc9" containerID="0f2f54ca6ae8ca58f2a482a882189d067d26a9956e089782e9656aa1ae3d4e6e" exitCode=0 Dec 02 15:06:54 crc kubenswrapper[4814]: I1202 15:06:54.280815 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" 
event={"ID":"d093e349-97d2-49a3-8767-3c6a694ecbc9","Type":"ContainerDied","Data":"0f2f54ca6ae8ca58f2a482a882189d067d26a9956e089782e9656aa1ae3d4e6e"} Dec 02 15:06:55 crc kubenswrapper[4814]: I1202 15:06:55.669170 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" Dec 02 15:06:55 crc kubenswrapper[4814]: I1202 15:06:55.831448 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d093e349-97d2-49a3-8767-3c6a694ecbc9-db-sync-config-data\") pod \"d093e349-97d2-49a3-8767-3c6a694ecbc9\" (UID: \"d093e349-97d2-49a3-8767-3c6a694ecbc9\") " Dec 02 15:06:55 crc kubenswrapper[4814]: I1202 15:06:55.831778 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-822wj\" (UniqueName: \"kubernetes.io/projected/d093e349-97d2-49a3-8767-3c6a694ecbc9-kube-api-access-822wj\") pod \"d093e349-97d2-49a3-8767-3c6a694ecbc9\" (UID: \"d093e349-97d2-49a3-8767-3c6a694ecbc9\") " Dec 02 15:06:55 crc kubenswrapper[4814]: I1202 15:06:55.831862 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d093e349-97d2-49a3-8767-3c6a694ecbc9-config-data\") pod \"d093e349-97d2-49a3-8767-3c6a694ecbc9\" (UID: \"d093e349-97d2-49a3-8767-3c6a694ecbc9\") " Dec 02 15:06:55 crc kubenswrapper[4814]: I1202 15:06:55.831929 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d093e349-97d2-49a3-8767-3c6a694ecbc9-combined-ca-bundle\") pod \"d093e349-97d2-49a3-8767-3c6a694ecbc9\" (UID: \"d093e349-97d2-49a3-8767-3c6a694ecbc9\") " Dec 02 15:06:55 crc kubenswrapper[4814]: I1202 15:06:55.837994 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/d093e349-97d2-49a3-8767-3c6a694ecbc9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d093e349-97d2-49a3-8767-3c6a694ecbc9" (UID: "d093e349-97d2-49a3-8767-3c6a694ecbc9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:06:55 crc kubenswrapper[4814]: I1202 15:06:55.838630 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d093e349-97d2-49a3-8767-3c6a694ecbc9-kube-api-access-822wj" (OuterVolumeSpecName: "kube-api-access-822wj") pod "d093e349-97d2-49a3-8767-3c6a694ecbc9" (UID: "d093e349-97d2-49a3-8767-3c6a694ecbc9"). InnerVolumeSpecName "kube-api-access-822wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:06:55 crc kubenswrapper[4814]: I1202 15:06:55.871866 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d093e349-97d2-49a3-8767-3c6a694ecbc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d093e349-97d2-49a3-8767-3c6a694ecbc9" (UID: "d093e349-97d2-49a3-8767-3c6a694ecbc9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:06:55 crc kubenswrapper[4814]: I1202 15:06:55.885656 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d093e349-97d2-49a3-8767-3c6a694ecbc9-config-data" (OuterVolumeSpecName: "config-data") pod "d093e349-97d2-49a3-8767-3c6a694ecbc9" (UID: "d093e349-97d2-49a3-8767-3c6a694ecbc9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:06:55 crc kubenswrapper[4814]: I1202 15:06:55.933575 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d093e349-97d2-49a3-8767-3c6a694ecbc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:55 crc kubenswrapper[4814]: I1202 15:06:55.933607 4814 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d093e349-97d2-49a3-8767-3c6a694ecbc9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:55 crc kubenswrapper[4814]: I1202 15:06:55.933616 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-822wj\" (UniqueName: \"kubernetes.io/projected/d093e349-97d2-49a3-8767-3c6a694ecbc9-kube-api-access-822wj\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:55 crc kubenswrapper[4814]: I1202 15:06:55.933653 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d093e349-97d2-49a3-8767-3c6a694ecbc9-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.300213 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" event={"ID":"d093e349-97d2-49a3-8767-3c6a694ecbc9","Type":"ContainerDied","Data":"e23afb3c71a066ed8f808a42fe69b78240d4090f420588b60ecd62645a304291"} Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.300274 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e23afb3c71a066ed8f808a42fe69b78240d4090f420588b60ecd62645a304291" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.300578 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-phq62" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.592516 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:06:56 crc kubenswrapper[4814]: E1202 15:06:56.593565 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d093e349-97d2-49a3-8767-3c6a694ecbc9" containerName="watcher-kuttl-db-sync" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.593711 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d093e349-97d2-49a3-8767-3c6a694ecbc9" containerName="watcher-kuttl-db-sync" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.593970 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d093e349-97d2-49a3-8767-3c6a694ecbc9" containerName="watcher-kuttl-db-sync" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.594923 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.598043 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-sm6v5" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.598579 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.600157 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.607403 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.609137 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.612250 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.612426 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.629265 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.659056 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.668329 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.686345 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.687495 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.695536 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.698056 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.744210 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/09c6e69b-7b35-4f61-8780-9fbab13a63e9-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.744300 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grk4h\" (UniqueName: \"kubernetes.io/projected/09c6e69b-7b35-4f61-8780-9fbab13a63e9-kube-api-access-grk4h\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.744324 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-logs\") pod \"watcher-kuttl-api-0\" (UID: \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.744350 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrrd8\" (UniqueName: \"kubernetes.io/projected/66d98ba1-f5e2-481e-938d-ec7aa533d11a-kube-api-access-hrrd8\") pod 
\"watcher-kuttl-api-1\" (UID: \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.744367 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c6e69b-7b35-4f61-8780-9fbab13a63e9-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.744452 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66d98ba1-f5e2-481e-938d-ec7aa533d11a-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.744517 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09c6e69b-7b35-4f61-8780-9fbab13a63e9-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.744631 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.744682 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.744721 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d98ba1-f5e2-481e-938d-ec7aa533d11a-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.744748 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.744828 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/66d98ba1-f5e2-481e-938d-ec7aa533d11a-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.744872 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtxhb\" (UniqueName: \"kubernetes.io/projected/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-kube-api-access-qtxhb\") pod \"watcher-kuttl-api-0\" (UID: \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.744893 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66d98ba1-f5e2-481e-938d-ec7aa533d11a-logs\") pod \"watcher-kuttl-api-1\" (UID: \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.744931 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c6e69b-7b35-4f61-8780-9fbab13a63e9-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.846318 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.846372 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.846397 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63431300-f0ba-4366-bdbf-4252b564a4e1-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"63431300-f0ba-4366-bdbf-4252b564a4e1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.846417 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/66d98ba1-f5e2-481e-938d-ec7aa533d11a-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.846435 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63431300-f0ba-4366-bdbf-4252b564a4e1-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"63431300-f0ba-4366-bdbf-4252b564a4e1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.846453 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.846476 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/66d98ba1-f5e2-481e-938d-ec7aa533d11a-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.846830 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63431300-f0ba-4366-bdbf-4252b564a4e1-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"63431300-f0ba-4366-bdbf-4252b564a4e1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.846969 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtxhb\" (UniqueName: 
\"kubernetes.io/projected/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-kube-api-access-qtxhb\") pod \"watcher-kuttl-api-0\" (UID: \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.847038 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66d98ba1-f5e2-481e-938d-ec7aa533d11a-logs\") pod \"watcher-kuttl-api-1\" (UID: \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.847100 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c6e69b-7b35-4f61-8780-9fbab13a63e9-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.847179 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/09c6e69b-7b35-4f61-8780-9fbab13a63e9-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.847533 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66d98ba1-f5e2-481e-938d-ec7aa533d11a-logs\") pod \"watcher-kuttl-api-1\" (UID: \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.847805 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grk4h\" (UniqueName: 
\"kubernetes.io/projected/09c6e69b-7b35-4f61-8780-9fbab13a63e9-kube-api-access-grk4h\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.847898 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-logs\") pod \"watcher-kuttl-api-0\" (UID: \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.848063 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrrd8\" (UniqueName: \"kubernetes.io/projected/66d98ba1-f5e2-481e-938d-ec7aa533d11a-kube-api-access-hrrd8\") pod \"watcher-kuttl-api-1\" (UID: \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.848206 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c6e69b-7b35-4f61-8780-9fbab13a63e9-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.848299 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr2lt\" (UniqueName: \"kubernetes.io/projected/63431300-f0ba-4366-bdbf-4252b564a4e1-kube-api-access-rr2lt\") pod \"watcher-kuttl-applier-0\" (UID: \"63431300-f0ba-4366-bdbf-4252b564a4e1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.848377 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/66d98ba1-f5e2-481e-938d-ec7aa533d11a-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.848473 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09c6e69b-7b35-4f61-8780-9fbab13a63e9-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.849398 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09c6e69b-7b35-4f61-8780-9fbab13a63e9-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.856323 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.856487 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-logs\") pod \"watcher-kuttl-api-0\" (UID: \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.856468 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/09c6e69b-7b35-4f61-8780-9fbab13a63e9-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" 
(UID: \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.858324 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.861209 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d98ba1-f5e2-481e-938d-ec7aa533d11a-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.866753 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/66d98ba1-f5e2-481e-938d-ec7aa533d11a-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.868658 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c6e69b-7b35-4f61-8780-9fbab13a63e9-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.869174 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.874998 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c6e69b-7b35-4f61-8780-9fbab13a63e9-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.875686 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66d98ba1-f5e2-481e-938d-ec7aa533d11a-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.879917 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grk4h\" (UniqueName: \"kubernetes.io/projected/09c6e69b-7b35-4f61-8780-9fbab13a63e9-kube-api-access-grk4h\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.880160 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrrd8\" (UniqueName: \"kubernetes.io/projected/66d98ba1-f5e2-481e-938d-ec7aa533d11a-kube-api-access-hrrd8\") pod \"watcher-kuttl-api-1\" (UID: \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.880981 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtxhb\" (UniqueName: \"kubernetes.io/projected/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-kube-api-access-qtxhb\") pod \"watcher-kuttl-api-0\" (UID: \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 
02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.913901 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.935651 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.943625 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.950762 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63431300-f0ba-4366-bdbf-4252b564a4e1-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"63431300-f0ba-4366-bdbf-4252b564a4e1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.950816 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63431300-f0ba-4366-bdbf-4252b564a4e1-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"63431300-f0ba-4366-bdbf-4252b564a4e1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.950859 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63431300-f0ba-4366-bdbf-4252b564a4e1-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"63431300-f0ba-4366-bdbf-4252b564a4e1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.950952 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr2lt\" (UniqueName: \"kubernetes.io/projected/63431300-f0ba-4366-bdbf-4252b564a4e1-kube-api-access-rr2lt\") pod 
\"watcher-kuttl-applier-0\" (UID: \"63431300-f0ba-4366-bdbf-4252b564a4e1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.951399 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63431300-f0ba-4366-bdbf-4252b564a4e1-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"63431300-f0ba-4366-bdbf-4252b564a4e1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.954288 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63431300-f0ba-4366-bdbf-4252b564a4e1-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"63431300-f0ba-4366-bdbf-4252b564a4e1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.960126 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63431300-f0ba-4366-bdbf-4252b564a4e1-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"63431300-f0ba-4366-bdbf-4252b564a4e1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:06:56 crc kubenswrapper[4814]: I1202 15:06:56.970425 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr2lt\" (UniqueName: \"kubernetes.io/projected/63431300-f0ba-4366-bdbf-4252b564a4e1-kube-api-access-rr2lt\") pod \"watcher-kuttl-applier-0\" (UID: \"63431300-f0ba-4366-bdbf-4252b564a4e1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:06:57 crc kubenswrapper[4814]: I1202 15:06:57.025333 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:06:58 crc kubenswrapper[4814]: I1202 15:06:57.593141 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:06:58 crc kubenswrapper[4814]: I1202 15:06:57.683209 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Dec 02 15:06:58 crc kubenswrapper[4814]: I1202 15:06:57.689404 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:06:58 crc kubenswrapper[4814]: I1202 15:06:57.831739 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:06:58 crc kubenswrapper[4814]: I1202 15:06:58.329023 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"66d98ba1-f5e2-481e-938d-ec7aa533d11a","Type":"ContainerStarted","Data":"5d9e7239e8f6143369c5980c9c3741ae20f30331b097b13065f17c030f5378c7"} Dec 02 15:06:58 crc kubenswrapper[4814]: I1202 15:06:58.329337 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"66d98ba1-f5e2-481e-938d-ec7aa533d11a","Type":"ContainerStarted","Data":"b104e1643e7e0aa29acdd43683ddfba264e1e582f55ab8af1c3933b29fe3b660"} Dec 02 15:06:58 crc kubenswrapper[4814]: I1202 15:06:58.333829 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9","Type":"ContainerStarted","Data":"e3d089f69a0c887f91751722fa587694689b982a9adcbcd96ed83f4eb2d6bd3f"} Dec 02 15:06:58 crc kubenswrapper[4814]: I1202 15:06:58.333861 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9","Type":"ContainerStarted","Data":"f03c6c9445926da5acb44a3898958b4ea9b5a8d9e7d4c5cbad5b3c62cf33632e"} Dec 02 15:06:58 crc kubenswrapper[4814]: I1202 15:06:58.342935 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"09c6e69b-7b35-4f61-8780-9fbab13a63e9","Type":"ContainerStarted","Data":"11fa7620ead5ac1c958d056adbff5abc07682d8940b62759365a04464ffbf151"} Dec 02 15:06:58 crc kubenswrapper[4814]: I1202 15:06:58.342975 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"09c6e69b-7b35-4f61-8780-9fbab13a63e9","Type":"ContainerStarted","Data":"e3aa3d2fd731ddebd920b7bd9811abc7a61b0ac4f914da4692fbee8ce3392d53"} Dec 02 15:06:58 crc kubenswrapper[4814]: I1202 15:06:58.344884 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"63431300-f0ba-4366-bdbf-4252b564a4e1","Type":"ContainerStarted","Data":"9d9ee428568cbd5458748cf7dfb3ff124421ac5ebe7867156d59b45b81d54fc8"} Dec 02 15:06:58 crc kubenswrapper[4814]: I1202 15:06:58.344927 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"63431300-f0ba-4366-bdbf-4252b564a4e1","Type":"ContainerStarted","Data":"d616a76d1f493ac410b5a0516ec4083ef882851492a36f38779665a82e6194d5"} Dec 02 15:06:58 crc kubenswrapper[4814]: I1202 15:06:58.373307 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.373290504 podStartE2EDuration="2.373290504s" podCreationTimestamp="2025-12-02 15:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:06:58.372587827 +0000 UTC m=+2280.496950031" watchObservedRunningTime="2025-12-02 
15:06:58.373290504 +0000 UTC m=+2280.497652708" Dec 02 15:06:58 crc kubenswrapper[4814]: I1202 15:06:58.396448 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.396419419 podStartE2EDuration="2.396419419s" podCreationTimestamp="2025-12-02 15:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:06:58.388174434 +0000 UTC m=+2280.512536638" watchObservedRunningTime="2025-12-02 15:06:58.396419419 +0000 UTC m=+2280.520781623" Dec 02 15:06:59 crc kubenswrapper[4814]: I1202 15:06:59.353903 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"66d98ba1-f5e2-481e-938d-ec7aa533d11a","Type":"ContainerStarted","Data":"bc910bb3a2e4e1e4463086be16e4f96829134909a33464680c13dd3f8ec0a4a5"} Dec 02 15:06:59 crc kubenswrapper[4814]: I1202 15:06:59.355447 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:06:59 crc kubenswrapper[4814]: I1202 15:06:59.363126 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9","Type":"ContainerStarted","Data":"735312b28f66c232bd138935b0f6f91cc2fdeccdc5bda6f489a38a8e236a0b77"} Dec 02 15:06:59 crc kubenswrapper[4814]: I1202 15:06:59.363734 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:06:59 crc kubenswrapper[4814]: I1202 15:06:59.388647 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-1" podStartSLOduration=3.388629798 podStartE2EDuration="3.388629798s" podCreationTimestamp="2025-12-02 15:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:06:59.381682295 +0000 UTC m=+2281.506044539" watchObservedRunningTime="2025-12-02 15:06:59.388629798 +0000 UTC m=+2281.512992002" Dec 02 15:06:59 crc kubenswrapper[4814]: I1202 15:06:59.414495 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=3.4144766300000002 podStartE2EDuration="3.41447663s" podCreationTimestamp="2025-12-02 15:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:06:59.408368798 +0000 UTC m=+2281.532731012" watchObservedRunningTime="2025-12-02 15:06:59.41447663 +0000 UTC m=+2281.538838854" Dec 02 15:07:00 crc kubenswrapper[4814]: I1202 15:07:00.517287 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7gbdv"] Dec 02 15:07:00 crc kubenswrapper[4814]: I1202 15:07:00.520751 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7gbdv" Dec 02 15:07:00 crc kubenswrapper[4814]: I1202 15:07:00.529158 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7gbdv"] Dec 02 15:07:00 crc kubenswrapper[4814]: I1202 15:07:00.635825 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brdh7\" (UniqueName: \"kubernetes.io/projected/c8499ccd-670d-4221-8ed6-96678e23daeb-kube-api-access-brdh7\") pod \"community-operators-7gbdv\" (UID: \"c8499ccd-670d-4221-8ed6-96678e23daeb\") " pod="openshift-marketplace/community-operators-7gbdv" Dec 02 15:07:00 crc kubenswrapper[4814]: I1202 15:07:00.636087 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8499ccd-670d-4221-8ed6-96678e23daeb-catalog-content\") pod \"community-operators-7gbdv\" (UID: \"c8499ccd-670d-4221-8ed6-96678e23daeb\") " pod="openshift-marketplace/community-operators-7gbdv" Dec 02 15:07:00 crc kubenswrapper[4814]: I1202 15:07:00.636197 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8499ccd-670d-4221-8ed6-96678e23daeb-utilities\") pod \"community-operators-7gbdv\" (UID: \"c8499ccd-670d-4221-8ed6-96678e23daeb\") " pod="openshift-marketplace/community-operators-7gbdv" Dec 02 15:07:00 crc kubenswrapper[4814]: I1202 15:07:00.737637 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8499ccd-670d-4221-8ed6-96678e23daeb-utilities\") pod \"community-operators-7gbdv\" (UID: \"c8499ccd-670d-4221-8ed6-96678e23daeb\") " pod="openshift-marketplace/community-operators-7gbdv" Dec 02 15:07:00 crc kubenswrapper[4814]: I1202 15:07:00.737740 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-brdh7\" (UniqueName: \"kubernetes.io/projected/c8499ccd-670d-4221-8ed6-96678e23daeb-kube-api-access-brdh7\") pod \"community-operators-7gbdv\" (UID: \"c8499ccd-670d-4221-8ed6-96678e23daeb\") " pod="openshift-marketplace/community-operators-7gbdv" Dec 02 15:07:00 crc kubenswrapper[4814]: I1202 15:07:00.737863 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8499ccd-670d-4221-8ed6-96678e23daeb-catalog-content\") pod \"community-operators-7gbdv\" (UID: \"c8499ccd-670d-4221-8ed6-96678e23daeb\") " pod="openshift-marketplace/community-operators-7gbdv" Dec 02 15:07:00 crc kubenswrapper[4814]: I1202 15:07:00.738106 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8499ccd-670d-4221-8ed6-96678e23daeb-utilities\") pod \"community-operators-7gbdv\" (UID: \"c8499ccd-670d-4221-8ed6-96678e23daeb\") " pod="openshift-marketplace/community-operators-7gbdv" Dec 02 15:07:00 crc kubenswrapper[4814]: I1202 15:07:00.738406 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8499ccd-670d-4221-8ed6-96678e23daeb-catalog-content\") pod \"community-operators-7gbdv\" (UID: \"c8499ccd-670d-4221-8ed6-96678e23daeb\") " pod="openshift-marketplace/community-operators-7gbdv" Dec 02 15:07:00 crc kubenswrapper[4814]: I1202 15:07:00.767571 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brdh7\" (UniqueName: \"kubernetes.io/projected/c8499ccd-670d-4221-8ed6-96678e23daeb-kube-api-access-brdh7\") pod \"community-operators-7gbdv\" (UID: \"c8499ccd-670d-4221-8ed6-96678e23daeb\") " pod="openshift-marketplace/community-operators-7gbdv" Dec 02 15:07:00 crc kubenswrapper[4814]: I1202 15:07:00.852201 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7gbdv" Dec 02 15:07:01 crc kubenswrapper[4814]: I1202 15:07:01.338643 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7gbdv"] Dec 02 15:07:01 crc kubenswrapper[4814]: I1202 15:07:01.392594 4814 generic.go:334] "Generic (PLEG): container finished" podID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerID="11fa7620ead5ac1c958d056adbff5abc07682d8940b62759365a04464ffbf151" exitCode=1 Dec 02 15:07:01 crc kubenswrapper[4814]: I1202 15:07:01.392643 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"09c6e69b-7b35-4f61-8780-9fbab13a63e9","Type":"ContainerDied","Data":"11fa7620ead5ac1c958d056adbff5abc07682d8940b62759365a04464ffbf151"} Dec 02 15:07:01 crc kubenswrapper[4814]: I1202 15:07:01.393135 4814 scope.go:117] "RemoveContainer" containerID="11fa7620ead5ac1c958d056adbff5abc07682d8940b62759365a04464ffbf151" Dec 02 15:07:01 crc kubenswrapper[4814]: I1202 15:07:01.396887 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 15:07:01 crc kubenswrapper[4814]: I1202 15:07:01.397051 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gbdv" event={"ID":"c8499ccd-670d-4221-8ed6-96678e23daeb","Type":"ContainerStarted","Data":"972b3d29bb862b92eb6c6299635f2ded62c0d046e8fe063bde90c275455510d6"} Dec 02 15:07:01 crc kubenswrapper[4814]: E1202 15:07:01.604333 4814 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8499ccd_670d_4221_8ed6_96678e23daeb.slice/crio-conmon-04dc6ac82e12d0e99702180ff4630c93f9129f9acf855f07d37d430d6f7e1083.scope\": RecentStats: unable to find data in memory cache]" Dec 02 15:07:01 crc kubenswrapper[4814]: I1202 15:07:01.812461 4814 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:07:01 crc kubenswrapper[4814]: I1202 15:07:01.878631 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:07:01 crc kubenswrapper[4814]: I1202 15:07:01.936398 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:07:01 crc kubenswrapper[4814]: I1202 15:07:01.944565 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:07:02 crc kubenswrapper[4814]: I1202 15:07:02.025439 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:07:02 crc kubenswrapper[4814]: I1202 15:07:02.407871 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"09c6e69b-7b35-4f61-8780-9fbab13a63e9","Type":"ContainerStarted","Data":"2b3d907e8992193c067b873dd0bc84af40f08fcb8373a9525bb18e7f91369618"} Dec 02 15:07:02 crc kubenswrapper[4814]: I1202 15:07:02.411541 4814 generic.go:334] "Generic (PLEG): container finished" podID="c8499ccd-670d-4221-8ed6-96678e23daeb" containerID="04dc6ac82e12d0e99702180ff4630c93f9129f9acf855f07d37d430d6f7e1083" exitCode=0 Dec 02 15:07:02 crc kubenswrapper[4814]: I1202 15:07:02.411817 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gbdv" event={"ID":"c8499ccd-670d-4221-8ed6-96678e23daeb","Type":"ContainerDied","Data":"04dc6ac82e12d0e99702180ff4630c93f9129f9acf855f07d37d430d6f7e1083"} Dec 02 15:07:04 crc kubenswrapper[4814]: I1202 15:07:04.432735 4814 generic.go:334] "Generic (PLEG): container finished" podID="c8499ccd-670d-4221-8ed6-96678e23daeb" containerID="c9ee4ac242d563bf06032bfb245ffdc2054539ed55bfbf53f102620c3a235bff" exitCode=0 Dec 02 15:07:04 
crc kubenswrapper[4814]: I1202 15:07:04.432786 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gbdv" event={"ID":"c8499ccd-670d-4221-8ed6-96678e23daeb","Type":"ContainerDied","Data":"c9ee4ac242d563bf06032bfb245ffdc2054539ed55bfbf53f102620c3a235bff"} Dec 02 15:07:05 crc kubenswrapper[4814]: I1202 15:07:05.449289 4814 generic.go:334] "Generic (PLEG): container finished" podID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerID="2b3d907e8992193c067b873dd0bc84af40f08fcb8373a9525bb18e7f91369618" exitCode=1 Dec 02 15:07:05 crc kubenswrapper[4814]: I1202 15:07:05.449796 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"09c6e69b-7b35-4f61-8780-9fbab13a63e9","Type":"ContainerDied","Data":"2b3d907e8992193c067b873dd0bc84af40f08fcb8373a9525bb18e7f91369618"} Dec 02 15:07:05 crc kubenswrapper[4814]: I1202 15:07:05.449858 4814 scope.go:117] "RemoveContainer" containerID="11fa7620ead5ac1c958d056adbff5abc07682d8940b62759365a04464ffbf151" Dec 02 15:07:05 crc kubenswrapper[4814]: I1202 15:07:05.451177 4814 scope.go:117] "RemoveContainer" containerID="2b3d907e8992193c067b873dd0bc84af40f08fcb8373a9525bb18e7f91369618" Dec 02 15:07:05 crc kubenswrapper[4814]: E1202 15:07:05.452299 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:07:06 crc kubenswrapper[4814]: I1202 15:07:06.474102 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gbdv" 
event={"ID":"c8499ccd-670d-4221-8ed6-96678e23daeb","Type":"ContainerStarted","Data":"f1e43d31332e787f4b7a9ebb4ca552c156144d7791fd1644ccf3808d18c6d680"} Dec 02 15:07:06 crc kubenswrapper[4814]: I1202 15:07:06.503112 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7gbdv" podStartSLOduration=3.550251892 podStartE2EDuration="6.503093127s" podCreationTimestamp="2025-12-02 15:07:00 +0000 UTC" firstStartedPulling="2025-12-02 15:07:02.413845961 +0000 UTC m=+2284.538208165" lastFinishedPulling="2025-12-02 15:07:05.366687196 +0000 UTC m=+2287.491049400" observedRunningTime="2025-12-02 15:07:06.49918579 +0000 UTC m=+2288.623548004" watchObservedRunningTime="2025-12-02 15:07:06.503093127 +0000 UTC m=+2288.627455331" Dec 02 15:07:06 crc kubenswrapper[4814]: I1202 15:07:06.915078 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:07:06 crc kubenswrapper[4814]: I1202 15:07:06.915171 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:07:06 crc kubenswrapper[4814]: I1202 15:07:06.916327 4814 scope.go:117] "RemoveContainer" containerID="2b3d907e8992193c067b873dd0bc84af40f08fcb8373a9525bb18e7f91369618" Dec 02 15:07:06 crc kubenswrapper[4814]: E1202 15:07:06.916619 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:07:06 crc kubenswrapper[4814]: I1202 15:07:06.936712 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:07:06 crc kubenswrapper[4814]: I1202 15:07:06.945007 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:07:06 crc kubenswrapper[4814]: I1202 15:07:06.951357 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:07:06 crc kubenswrapper[4814]: I1202 15:07:06.957410 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:07:07 crc kubenswrapper[4814]: I1202 15:07:07.025879 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:07:07 crc kubenswrapper[4814]: I1202 15:07:07.053983 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:07:07 crc kubenswrapper[4814]: I1202 15:07:07.490626 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:07:07 crc kubenswrapper[4814]: I1202 15:07:07.495100 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:07:07 crc kubenswrapper[4814]: I1202 15:07:07.508261 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:07:10 crc kubenswrapper[4814]: I1202 15:07:10.853285 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7gbdv" Dec 02 15:07:10 crc kubenswrapper[4814]: I1202 15:07:10.853533 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7gbdv" Dec 02 15:07:10 crc kubenswrapper[4814]: I1202 15:07:10.913687 4814 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7gbdv" Dec 02 15:07:11 crc kubenswrapper[4814]: I1202 15:07:11.574843 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7gbdv" Dec 02 15:07:15 crc kubenswrapper[4814]: I1202 15:07:15.108084 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7gbdv"] Dec 02 15:07:15 crc kubenswrapper[4814]: I1202 15:07:15.108753 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7gbdv" podUID="c8499ccd-670d-4221-8ed6-96678e23daeb" containerName="registry-server" containerID="cri-o://f1e43d31332e787f4b7a9ebb4ca552c156144d7791fd1644ccf3808d18c6d680" gracePeriod=2 Dec 02 15:07:15 crc kubenswrapper[4814]: I1202 15:07:15.553587 4814 generic.go:334] "Generic (PLEG): container finished" podID="c8499ccd-670d-4221-8ed6-96678e23daeb" containerID="f1e43d31332e787f4b7a9ebb4ca552c156144d7791fd1644ccf3808d18c6d680" exitCode=0 Dec 02 15:07:15 crc kubenswrapper[4814]: I1202 15:07:15.553652 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gbdv" event={"ID":"c8499ccd-670d-4221-8ed6-96678e23daeb","Type":"ContainerDied","Data":"f1e43d31332e787f4b7a9ebb4ca552c156144d7791fd1644ccf3808d18c6d680"} Dec 02 15:07:16 crc kubenswrapper[4814]: I1202 15:07:16.107384 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7gbdv" Dec 02 15:07:16 crc kubenswrapper[4814]: I1202 15:07:16.190568 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8499ccd-670d-4221-8ed6-96678e23daeb-utilities\") pod \"c8499ccd-670d-4221-8ed6-96678e23daeb\" (UID: \"c8499ccd-670d-4221-8ed6-96678e23daeb\") " Dec 02 15:07:16 crc kubenswrapper[4814]: I1202 15:07:16.192389 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8499ccd-670d-4221-8ed6-96678e23daeb-utilities" (OuterVolumeSpecName: "utilities") pod "c8499ccd-670d-4221-8ed6-96678e23daeb" (UID: "c8499ccd-670d-4221-8ed6-96678e23daeb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:07:16 crc kubenswrapper[4814]: I1202 15:07:16.192493 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8499ccd-670d-4221-8ed6-96678e23daeb-catalog-content\") pod \"c8499ccd-670d-4221-8ed6-96678e23daeb\" (UID: \"c8499ccd-670d-4221-8ed6-96678e23daeb\") " Dec 02 15:07:16 crc kubenswrapper[4814]: I1202 15:07:16.192569 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brdh7\" (UniqueName: \"kubernetes.io/projected/c8499ccd-670d-4221-8ed6-96678e23daeb-kube-api-access-brdh7\") pod \"c8499ccd-670d-4221-8ed6-96678e23daeb\" (UID: \"c8499ccd-670d-4221-8ed6-96678e23daeb\") " Dec 02 15:07:16 crc kubenswrapper[4814]: I1202 15:07:16.199250 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8499ccd-670d-4221-8ed6-96678e23daeb-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:07:16 crc kubenswrapper[4814]: I1202 15:07:16.199603 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c8499ccd-670d-4221-8ed6-96678e23daeb-kube-api-access-brdh7" (OuterVolumeSpecName: "kube-api-access-brdh7") pod "c8499ccd-670d-4221-8ed6-96678e23daeb" (UID: "c8499ccd-670d-4221-8ed6-96678e23daeb"). InnerVolumeSpecName "kube-api-access-brdh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:07:16 crc kubenswrapper[4814]: I1202 15:07:16.254628 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8499ccd-670d-4221-8ed6-96678e23daeb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8499ccd-670d-4221-8ed6-96678e23daeb" (UID: "c8499ccd-670d-4221-8ed6-96678e23daeb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:07:16 crc kubenswrapper[4814]: I1202 15:07:16.300978 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8499ccd-670d-4221-8ed6-96678e23daeb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:07:16 crc kubenswrapper[4814]: I1202 15:07:16.301003 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brdh7\" (UniqueName: \"kubernetes.io/projected/c8499ccd-670d-4221-8ed6-96678e23daeb-kube-api-access-brdh7\") on node \"crc\" DevicePath \"\"" Dec 02 15:07:16 crc kubenswrapper[4814]: I1202 15:07:16.566924 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gbdv" event={"ID":"c8499ccd-670d-4221-8ed6-96678e23daeb","Type":"ContainerDied","Data":"972b3d29bb862b92eb6c6299635f2ded62c0d046e8fe063bde90c275455510d6"} Dec 02 15:07:16 crc kubenswrapper[4814]: I1202 15:07:16.566977 4814 scope.go:117] "RemoveContainer" containerID="f1e43d31332e787f4b7a9ebb4ca552c156144d7791fd1644ccf3808d18c6d680" Dec 02 15:07:16 crc kubenswrapper[4814]: I1202 15:07:16.567034 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7gbdv" Dec 02 15:07:16 crc kubenswrapper[4814]: I1202 15:07:16.598036 4814 scope.go:117] "RemoveContainer" containerID="c9ee4ac242d563bf06032bfb245ffdc2054539ed55bfbf53f102620c3a235bff" Dec 02 15:07:16 crc kubenswrapper[4814]: I1202 15:07:16.633570 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7gbdv"] Dec 02 15:07:16 crc kubenswrapper[4814]: I1202 15:07:16.644406 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7gbdv"] Dec 02 15:07:16 crc kubenswrapper[4814]: I1202 15:07:16.651096 4814 scope.go:117] "RemoveContainer" containerID="04dc6ac82e12d0e99702180ff4630c93f9129f9acf855f07d37d430d6f7e1083" Dec 02 15:07:18 crc kubenswrapper[4814]: I1202 15:07:18.171458 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8499ccd-670d-4221-8ed6-96678e23daeb" path="/var/lib/kubelet/pods/c8499ccd-670d-4221-8ed6-96678e23daeb/volumes" Dec 02 15:07:20 crc kubenswrapper[4814]: I1202 15:07:20.651662 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:07:20 crc kubenswrapper[4814]: I1202 15:07:20.651994 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:07:21 crc kubenswrapper[4814]: I1202 15:07:21.145592 4814 scope.go:117] "RemoveContainer" containerID="2b3d907e8992193c067b873dd0bc84af40f08fcb8373a9525bb18e7f91369618" Dec 02 15:07:21 crc kubenswrapper[4814]: 
I1202 15:07:21.637167 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"09c6e69b-7b35-4f61-8780-9fbab13a63e9","Type":"ContainerStarted","Data":"cd6a0edce17f4a307f1f1e665869bd379c03c0688e65816cc10e0f20687b3381"} Dec 02 15:07:24 crc kubenswrapper[4814]: I1202 15:07:24.667203 4814 generic.go:334] "Generic (PLEG): container finished" podID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerID="cd6a0edce17f4a307f1f1e665869bd379c03c0688e65816cc10e0f20687b3381" exitCode=1 Dec 02 15:07:24 crc kubenswrapper[4814]: I1202 15:07:24.667264 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"09c6e69b-7b35-4f61-8780-9fbab13a63e9","Type":"ContainerDied","Data":"cd6a0edce17f4a307f1f1e665869bd379c03c0688e65816cc10e0f20687b3381"} Dec 02 15:07:24 crc kubenswrapper[4814]: I1202 15:07:24.667584 4814 scope.go:117] "RemoveContainer" containerID="2b3d907e8992193c067b873dd0bc84af40f08fcb8373a9525bb18e7f91369618" Dec 02 15:07:24 crc kubenswrapper[4814]: I1202 15:07:24.668602 4814 scope.go:117] "RemoveContainer" containerID="cd6a0edce17f4a307f1f1e665869bd379c03c0688e65816cc10e0f20687b3381" Dec 02 15:07:24 crc kubenswrapper[4814]: E1202 15:07:24.668968 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:07:26 crc kubenswrapper[4814]: I1202 15:07:26.914523 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:07:26 crc kubenswrapper[4814]: I1202 15:07:26.914985 4814 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:07:26 crc kubenswrapper[4814]: I1202 15:07:26.915022 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:07:26 crc kubenswrapper[4814]: I1202 15:07:26.915049 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:07:26 crc kubenswrapper[4814]: I1202 15:07:26.916100 4814 scope.go:117] "RemoveContainer" containerID="cd6a0edce17f4a307f1f1e665869bd379c03c0688e65816cc10e0f20687b3381" Dec 02 15:07:26 crc kubenswrapper[4814]: E1202 15:07:26.916800 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:07:38 crc kubenswrapper[4814]: I1202 15:07:38.159987 4814 scope.go:117] "RemoveContainer" containerID="cd6a0edce17f4a307f1f1e665869bd379c03c0688e65816cc10e0f20687b3381" Dec 02 15:07:38 crc kubenswrapper[4814]: E1202 15:07:38.161029 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:07:50 crc kubenswrapper[4814]: I1202 15:07:50.651657 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:07:50 crc kubenswrapper[4814]: I1202 15:07:50.654705 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:07:50 crc kubenswrapper[4814]: I1202 15:07:50.655175 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 15:07:50 crc kubenswrapper[4814]: I1202 15:07:50.656410 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584"} pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 15:07:50 crc kubenswrapper[4814]: I1202 15:07:50.656679 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" containerID="cri-o://aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" gracePeriod=600 Dec 02 15:07:50 crc kubenswrapper[4814]: E1202 15:07:50.788663 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:07:50 crc kubenswrapper[4814]: I1202 15:07:50.904831 4814 generic.go:334] "Generic (PLEG): container finished" podID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" exitCode=0 Dec 02 15:07:50 crc kubenswrapper[4814]: I1202 15:07:50.904892 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerDied","Data":"aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584"} Dec 02 15:07:50 crc kubenswrapper[4814]: I1202 15:07:50.904934 4814 scope.go:117] "RemoveContainer" containerID="908761a95c0f9f6447a8c26806b20d7573f99def1bcbce124d6957c579039255" Dec 02 15:07:50 crc kubenswrapper[4814]: I1202 15:07:50.908881 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:07:50 crc kubenswrapper[4814]: E1202 15:07:50.909432 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:07:51 crc kubenswrapper[4814]: I1202 15:07:51.145021 4814 scope.go:117] "RemoveContainer" containerID="cd6a0edce17f4a307f1f1e665869bd379c03c0688e65816cc10e0f20687b3381" Dec 02 15:07:51 crc kubenswrapper[4814]: I1202 15:07:51.916340 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"09c6e69b-7b35-4f61-8780-9fbab13a63e9","Type":"ContainerStarted","Data":"e5406cc577f6deb0467ee6e95444489c364ad88b55fc042a9e81a50032c2c73c"} Dec 02 15:07:54 crc kubenswrapper[4814]: I1202 15:07:54.968735 4814 generic.go:334] "Generic (PLEG): container finished" podID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerID="e5406cc577f6deb0467ee6e95444489c364ad88b55fc042a9e81a50032c2c73c" exitCode=1 Dec 02 15:07:54 crc kubenswrapper[4814]: I1202 15:07:54.968761 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"09c6e69b-7b35-4f61-8780-9fbab13a63e9","Type":"ContainerDied","Data":"e5406cc577f6deb0467ee6e95444489c364ad88b55fc042a9e81a50032c2c73c"} Dec 02 15:07:54 crc kubenswrapper[4814]: I1202 15:07:54.969100 4814 scope.go:117] "RemoveContainer" containerID="cd6a0edce17f4a307f1f1e665869bd379c03c0688e65816cc10e0f20687b3381" Dec 02 15:07:54 crc kubenswrapper[4814]: I1202 15:07:54.970631 4814 scope.go:117] "RemoveContainer" containerID="e5406cc577f6deb0467ee6e95444489c364ad88b55fc042a9e81a50032c2c73c" Dec 02 15:07:54 crc kubenswrapper[4814]: E1202 15:07:54.970899 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:07:56 crc kubenswrapper[4814]: I1202 15:07:56.914611 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:07:56 crc kubenswrapper[4814]: I1202 15:07:56.915538 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:07:56 crc 
kubenswrapper[4814]: I1202 15:07:56.915603 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:07:56 crc kubenswrapper[4814]: I1202 15:07:56.915656 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:07:56 crc kubenswrapper[4814]: I1202 15:07:56.916287 4814 scope.go:117] "RemoveContainer" containerID="e5406cc577f6deb0467ee6e95444489c364ad88b55fc042a9e81a50032c2c73c" Dec 02 15:07:56 crc kubenswrapper[4814]: E1202 15:07:56.916558 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:07:56 crc kubenswrapper[4814]: I1202 15:07:56.995072 4814 scope.go:117] "RemoveContainer" containerID="e5406cc577f6deb0467ee6e95444489c364ad88b55fc042a9e81a50032c2c73c" Dec 02 15:07:56 crc kubenswrapper[4814]: E1202 15:07:56.995452 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:08:00 crc kubenswrapper[4814]: I1202 15:08:00.498145 4814 scope.go:117] "RemoveContainer" containerID="70f6682c2424d71ef75c692f245537db158f637639108dd494ed2c5ef6aa6f3a" Dec 02 15:08:00 crc kubenswrapper[4814]: I1202 15:08:00.530023 4814 scope.go:117] "RemoveContainer" 
containerID="4389b45cbc6bf49a651ff8f1f7aa8d53515c436ec6aa7fe3b6b4dd051751ad25" Dec 02 15:08:00 crc kubenswrapper[4814]: I1202 15:08:00.570155 4814 scope.go:117] "RemoveContainer" containerID="cc0286729897215dd1883bee7dc140ef947ab3fbc2ebaab9f9bef98b69481851" Dec 02 15:08:00 crc kubenswrapper[4814]: I1202 15:08:00.620974 4814 scope.go:117] "RemoveContainer" containerID="dae2fd8167ef76290c1bcb9573f25245d53d2e72cc9e003d7bd2c4368eeee32f" Dec 02 15:08:01 crc kubenswrapper[4814]: I1202 15:08:01.144665 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:08:01 crc kubenswrapper[4814]: E1202 15:08:01.145081 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:08:10 crc kubenswrapper[4814]: I1202 15:08:10.145888 4814 scope.go:117] "RemoveContainer" containerID="e5406cc577f6deb0467ee6e95444489c364ad88b55fc042a9e81a50032c2c73c" Dec 02 15:08:10 crc kubenswrapper[4814]: E1202 15:08:10.147460 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:08:13 crc kubenswrapper[4814]: I1202 15:08:13.146057 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:08:13 crc kubenswrapper[4814]: E1202 
15:08:13.146754 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:08:17 crc kubenswrapper[4814]: E1202 15:08:17.153699 4814 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.13:33058->38.102.83.13:39561: write tcp 38.102.83.13:33058->38.102.83.13:39561: write: broken pipe Dec 02 15:08:22 crc kubenswrapper[4814]: I1202 15:08:22.145930 4814 scope.go:117] "RemoveContainer" containerID="e5406cc577f6deb0467ee6e95444489c364ad88b55fc042a9e81a50032c2c73c" Dec 02 15:08:22 crc kubenswrapper[4814]: E1202 15:08:22.146739 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:08:26 crc kubenswrapper[4814]: I1202 15:08:26.145206 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:08:26 crc kubenswrapper[4814]: E1202 15:08:26.145924 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" 
podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:08:36 crc kubenswrapper[4814]: I1202 15:08:36.145126 4814 scope.go:117] "RemoveContainer" containerID="e5406cc577f6deb0467ee6e95444489c364ad88b55fc042a9e81a50032c2c73c" Dec 02 15:08:36 crc kubenswrapper[4814]: I1202 15:08:36.362809 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"09c6e69b-7b35-4f61-8780-9fbab13a63e9","Type":"ContainerStarted","Data":"b5f66d8f9711dfe9223f98bc14d90864e22b0bf9eab80d5eee9b3627e14aa7ce"} Dec 02 15:08:36 crc kubenswrapper[4814]: I1202 15:08:36.914894 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:08:36 crc kubenswrapper[4814]: I1202 15:08:36.942337 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:08:37 crc kubenswrapper[4814]: I1202 15:08:37.144907 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:08:37 crc kubenswrapper[4814]: E1202 15:08:37.145362 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:08:37 crc kubenswrapper[4814]: I1202 15:08:37.373032 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:08:37 crc kubenswrapper[4814]: I1202 15:08:37.409958 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:08:39 crc kubenswrapper[4814]: I1202 15:08:39.391110 4814 generic.go:334] "Generic (PLEG): container finished" podID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerID="b5f66d8f9711dfe9223f98bc14d90864e22b0bf9eab80d5eee9b3627e14aa7ce" exitCode=1 Dec 02 15:08:39 crc kubenswrapper[4814]: I1202 15:08:39.391215 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"09c6e69b-7b35-4f61-8780-9fbab13a63e9","Type":"ContainerDied","Data":"b5f66d8f9711dfe9223f98bc14d90864e22b0bf9eab80d5eee9b3627e14aa7ce"} Dec 02 15:08:39 crc kubenswrapper[4814]: I1202 15:08:39.391411 4814 scope.go:117] "RemoveContainer" containerID="e5406cc577f6deb0467ee6e95444489c364ad88b55fc042a9e81a50032c2c73c" Dec 02 15:08:39 crc kubenswrapper[4814]: I1202 15:08:39.391709 4814 scope.go:117] "RemoveContainer" containerID="b5f66d8f9711dfe9223f98bc14d90864e22b0bf9eab80d5eee9b3627e14aa7ce" Dec 02 15:08:39 crc kubenswrapper[4814]: E1202 15:08:39.391987 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:08:40 crc kubenswrapper[4814]: I1202 15:08:40.404327 4814 scope.go:117] "RemoveContainer" containerID="b5f66d8f9711dfe9223f98bc14d90864e22b0bf9eab80d5eee9b3627e14aa7ce" Dec 02 15:08:40 crc kubenswrapper[4814]: E1202 15:08:40.405590 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine 
pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:08:46 crc kubenswrapper[4814]: I1202 15:08:46.914074 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:08:46 crc kubenswrapper[4814]: I1202 15:08:46.915562 4814 scope.go:117] "RemoveContainer" containerID="b5f66d8f9711dfe9223f98bc14d90864e22b0bf9eab80d5eee9b3627e14aa7ce" Dec 02 15:08:46 crc kubenswrapper[4814]: E1202 15:08:46.915975 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:08:50 crc kubenswrapper[4814]: I1202 15:08:50.144633 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:08:50 crc kubenswrapper[4814]: E1202 15:08:50.145287 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:08:56 crc kubenswrapper[4814]: I1202 15:08:56.914779 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:08:56 crc kubenswrapper[4814]: I1202 15:08:56.915391 4814 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:08:56 crc kubenswrapper[4814]: I1202 15:08:56.916043 4814 scope.go:117] "RemoveContainer" containerID="b5f66d8f9711dfe9223f98bc14d90864e22b0bf9eab80d5eee9b3627e14aa7ce" Dec 02 15:08:56 crc kubenswrapper[4814]: E1202 15:08:56.916259 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:09:05 crc kubenswrapper[4814]: I1202 15:09:05.146137 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:09:05 crc kubenswrapper[4814]: E1202 15:09:05.147853 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:09:12 crc kubenswrapper[4814]: I1202 15:09:12.144929 4814 scope.go:117] "RemoveContainer" containerID="b5f66d8f9711dfe9223f98bc14d90864e22b0bf9eab80d5eee9b3627e14aa7ce" Dec 02 15:09:12 crc kubenswrapper[4814]: E1202 15:09:12.145620 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine 
pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:09:16 crc kubenswrapper[4814]: I1202 15:09:16.145031 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:09:16 crc kubenswrapper[4814]: E1202 15:09:16.146111 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:09:23 crc kubenswrapper[4814]: I1202 15:09:23.145504 4814 scope.go:117] "RemoveContainer" containerID="b5f66d8f9711dfe9223f98bc14d90864e22b0bf9eab80d5eee9b3627e14aa7ce" Dec 02 15:09:23 crc kubenswrapper[4814]: E1202 15:09:23.146366 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:09:28 crc kubenswrapper[4814]: I1202 15:09:28.153903 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:09:28 crc kubenswrapper[4814]: E1202 15:09:28.154792 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:09:35 crc kubenswrapper[4814]: I1202 15:09:35.145075 4814 scope.go:117] "RemoveContainer" containerID="b5f66d8f9711dfe9223f98bc14d90864e22b0bf9eab80d5eee9b3627e14aa7ce" Dec 02 15:09:35 crc kubenswrapper[4814]: E1202 15:09:35.145878 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:09:42 crc kubenswrapper[4814]: I1202 15:09:42.145132 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:09:42 crc kubenswrapper[4814]: E1202 15:09:42.145961 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:09:48 crc kubenswrapper[4814]: I1202 15:09:48.156010 4814 scope.go:117] "RemoveContainer" containerID="b5f66d8f9711dfe9223f98bc14d90864e22b0bf9eab80d5eee9b3627e14aa7ce" Dec 02 15:09:48 crc kubenswrapper[4814]: E1202 15:09:48.156625 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed 
container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:09:56 crc kubenswrapper[4814]: I1202 15:09:56.144442 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:09:56 crc kubenswrapper[4814]: E1202 15:09:56.145121 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:09:59 crc kubenswrapper[4814]: I1202 15:09:59.144737 4814 scope.go:117] "RemoveContainer" containerID="b5f66d8f9711dfe9223f98bc14d90864e22b0bf9eab80d5eee9b3627e14aa7ce" Dec 02 15:10:00 crc kubenswrapper[4814]: I1202 15:10:00.334644 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"09c6e69b-7b35-4f61-8780-9fbab13a63e9","Type":"ContainerStarted","Data":"7c83f286672515061aa5b03cd3052b75749877f06a2612c0c8f41eeba67c3c9d"} Dec 02 15:10:02 crc kubenswrapper[4814]: I1202 15:10:02.356955 4814 generic.go:334] "Generic (PLEG): container finished" podID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerID="7c83f286672515061aa5b03cd3052b75749877f06a2612c0c8f41eeba67c3c9d" exitCode=1 Dec 02 15:10:02 crc kubenswrapper[4814]: I1202 15:10:02.357056 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"09c6e69b-7b35-4f61-8780-9fbab13a63e9","Type":"ContainerDied","Data":"7c83f286672515061aa5b03cd3052b75749877f06a2612c0c8f41eeba67c3c9d"} Dec 02 15:10:02 crc kubenswrapper[4814]: I1202 15:10:02.357427 4814 scope.go:117] "RemoveContainer" containerID="b5f66d8f9711dfe9223f98bc14d90864e22b0bf9eab80d5eee9b3627e14aa7ce" Dec 02 15:10:02 crc kubenswrapper[4814]: I1202 15:10:02.358423 4814 scope.go:117] "RemoveContainer" containerID="7c83f286672515061aa5b03cd3052b75749877f06a2612c0c8f41eeba67c3c9d" Dec 02 15:10:02 crc kubenswrapper[4814]: E1202 15:10:02.358960 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:10:06 crc kubenswrapper[4814]: I1202 15:10:06.914266 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:10:06 crc kubenswrapper[4814]: I1202 15:10:06.914797 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:10:06 crc kubenswrapper[4814]: I1202 15:10:06.915332 4814 scope.go:117] "RemoveContainer" containerID="7c83f286672515061aa5b03cd3052b75749877f06a2612c0c8f41eeba67c3c9d" Dec 02 15:10:06 crc kubenswrapper[4814]: E1202 15:10:06.915561 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:10:09 crc kubenswrapper[4814]: I1202 15:10:09.145036 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:10:09 crc kubenswrapper[4814]: E1202 15:10:09.145447 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:10:19 crc kubenswrapper[4814]: I1202 15:10:19.146996 4814 scope.go:117] "RemoveContainer" containerID="7c83f286672515061aa5b03cd3052b75749877f06a2612c0c8f41eeba67c3c9d" Dec 02 15:10:19 crc kubenswrapper[4814]: E1202 15:10:19.149836 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:10:21 crc kubenswrapper[4814]: I1202 15:10:21.144642 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:10:21 crc kubenswrapper[4814]: E1202 15:10:21.145127 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:10:26 crc kubenswrapper[4814]: I1202 15:10:26.914784 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:10:26 crc kubenswrapper[4814]: I1202 15:10:26.915373 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:10:26 crc kubenswrapper[4814]: I1202 15:10:26.915989 4814 scope.go:117] "RemoveContainer" containerID="7c83f286672515061aa5b03cd3052b75749877f06a2612c0c8f41eeba67c3c9d" Dec 02 15:10:26 crc kubenswrapper[4814]: E1202 15:10:26.916261 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:10:34 crc kubenswrapper[4814]: I1202 15:10:34.145017 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:10:34 crc kubenswrapper[4814]: E1202 15:10:34.145750 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:10:40 crc kubenswrapper[4814]: I1202 15:10:40.144766 4814 scope.go:117] "RemoveContainer" 
containerID="7c83f286672515061aa5b03cd3052b75749877f06a2612c0c8f41eeba67c3c9d" Dec 02 15:10:40 crc kubenswrapper[4814]: E1202 15:10:40.145467 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:10:47 crc kubenswrapper[4814]: I1202 15:10:47.144525 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:10:47 crc kubenswrapper[4814]: E1202 15:10:47.145249 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:10:55 crc kubenswrapper[4814]: I1202 15:10:55.145013 4814 scope.go:117] "RemoveContainer" containerID="7c83f286672515061aa5b03cd3052b75749877f06a2612c0c8f41eeba67c3c9d" Dec 02 15:10:55 crc kubenswrapper[4814]: E1202 15:10:55.145876 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:11:02 crc kubenswrapper[4814]: I1202 15:11:02.145060 4814 scope.go:117] "RemoveContainer" 
containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:11:02 crc kubenswrapper[4814]: E1202 15:11:02.145719 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:11:09 crc kubenswrapper[4814]: I1202 15:11:09.145207 4814 scope.go:117] "RemoveContainer" containerID="7c83f286672515061aa5b03cd3052b75749877f06a2612c0c8f41eeba67c3c9d" Dec 02 15:11:09 crc kubenswrapper[4814]: E1202 15:11:09.146070 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:11:15 crc kubenswrapper[4814]: I1202 15:11:15.144263 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:11:15 crc kubenswrapper[4814]: E1202 15:11:15.145050 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:11:24 crc kubenswrapper[4814]: I1202 15:11:24.144246 4814 scope.go:117] 
"RemoveContainer" containerID="7c83f286672515061aa5b03cd3052b75749877f06a2612c0c8f41eeba67c3c9d" Dec 02 15:11:24 crc kubenswrapper[4814]: E1202 15:11:24.144881 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:11:29 crc kubenswrapper[4814]: I1202 15:11:29.144763 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:11:29 crc kubenswrapper[4814]: E1202 15:11:29.145528 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:11:38 crc kubenswrapper[4814]: I1202 15:11:38.150037 4814 scope.go:117] "RemoveContainer" containerID="7c83f286672515061aa5b03cd3052b75749877f06a2612c0c8f41eeba67c3c9d" Dec 02 15:11:38 crc kubenswrapper[4814]: E1202 15:11:38.151302 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(09c6e69b-7b35-4f61-8780-9fbab13a63e9)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" Dec 02 15:11:39 crc kubenswrapper[4814]: I1202 15:11:39.768940 4814 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-phq62"] Dec 02 15:11:39 crc kubenswrapper[4814]: I1202 15:11:39.784931 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-phq62"] Dec 02 15:11:39 crc kubenswrapper[4814]: I1202 15:11:39.822162 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher1187-account-delete-rqmbn"] Dec 02 15:11:39 crc kubenswrapper[4814]: E1202 15:11:39.822571 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8499ccd-670d-4221-8ed6-96678e23daeb" containerName="registry-server" Dec 02 15:11:39 crc kubenswrapper[4814]: I1202 15:11:39.822591 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8499ccd-670d-4221-8ed6-96678e23daeb" containerName="registry-server" Dec 02 15:11:39 crc kubenswrapper[4814]: E1202 15:11:39.822615 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8499ccd-670d-4221-8ed6-96678e23daeb" containerName="extract-content" Dec 02 15:11:39 crc kubenswrapper[4814]: I1202 15:11:39.822623 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8499ccd-670d-4221-8ed6-96678e23daeb" containerName="extract-content" Dec 02 15:11:39 crc kubenswrapper[4814]: E1202 15:11:39.822639 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8499ccd-670d-4221-8ed6-96678e23daeb" containerName="extract-utilities" Dec 02 15:11:39 crc kubenswrapper[4814]: I1202 15:11:39.822647 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8499ccd-670d-4221-8ed6-96678e23daeb" containerName="extract-utilities" Dec 02 15:11:39 crc kubenswrapper[4814]: I1202 15:11:39.822832 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8499ccd-670d-4221-8ed6-96678e23daeb" containerName="registry-server" Dec 02 15:11:39 crc kubenswrapper[4814]: I1202 15:11:39.823536 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher1187-account-delete-rqmbn" Dec 02 15:11:39 crc kubenswrapper[4814]: I1202 15:11:39.836662 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher1187-account-delete-rqmbn"] Dec 02 15:11:39 crc kubenswrapper[4814]: I1202 15:11:39.867074 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:11:39 crc kubenswrapper[4814]: I1202 15:11:39.867341 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="63431300-f0ba-4366-bdbf-4252b564a4e1" containerName="watcher-applier" containerID="cri-o://9d9ee428568cbd5458748cf7dfb3ff124421ac5ebe7867156d59b45b81d54fc8" gracePeriod=30 Dec 02 15:11:39 crc kubenswrapper[4814]: I1202 15:11:39.946878 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_09c6e69b-7b35-4f61-8780-9fbab13a63e9/watcher-decision-engine/5.log" Dec 02 15:11:39 crc kubenswrapper[4814]: I1202 15:11:39.967446 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Dec 02 15:11:39 crc kubenswrapper[4814]: I1202 15:11:39.967698 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="66d98ba1-f5e2-481e-938d-ec7aa533d11a" containerName="watcher-kuttl-api-log" containerID="cri-o://5d9e7239e8f6143369c5980c9c3741ae20f30331b097b13065f17c030f5378c7" gracePeriod=30 Dec 02 15:11:39 crc kubenswrapper[4814]: I1202 15:11:39.967748 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="66d98ba1-f5e2-481e-938d-ec7aa533d11a" containerName="watcher-api" containerID="cri-o://bc910bb3a2e4e1e4463086be16e4f96829134909a33464680c13dd3f8ec0a4a5" gracePeriod=30 Dec 02 15:11:39 crc kubenswrapper[4814]: 
I1202 15:11:39.982266 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:11:39 crc kubenswrapper[4814]: I1202 15:11:39.982521 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="064f90b0-5502-42bc-a1c1-b21c7ef7b9c9" containerName="watcher-kuttl-api-log" containerID="cri-o://e3d089f69a0c887f91751722fa587694689b982a9adcbcd96ed83f4eb2d6bd3f" gracePeriod=30 Dec 02 15:11:39 crc kubenswrapper[4814]: I1202 15:11:39.982673 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="064f90b0-5502-42bc-a1c1-b21c7ef7b9c9" containerName="watcher-api" containerID="cri-o://735312b28f66c232bd138935b0f6f91cc2fdeccdc5bda6f489a38a8e236a0b77" gracePeriod=30 Dec 02 15:11:39 crc kubenswrapper[4814]: I1202 15:11:39.991030 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:39.997715 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s5qs\" (UniqueName: \"kubernetes.io/projected/15e7cbb8-7524-4b13-84c9-3b122ee7902b-kube-api-access-5s5qs\") pod \"watcher1187-account-delete-rqmbn\" (UID: \"15e7cbb8-7524-4b13-84c9-3b122ee7902b\") " pod="watcher-kuttl-default/watcher1187-account-delete-rqmbn" Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:39.997838 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15e7cbb8-7524-4b13-84c9-3b122ee7902b-operator-scripts\") pod \"watcher1187-account-delete-rqmbn\" (UID: \"15e7cbb8-7524-4b13-84c9-3b122ee7902b\") " pod="watcher-kuttl-default/watcher1187-account-delete-rqmbn" Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.103894 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s5qs\" (UniqueName: \"kubernetes.io/projected/15e7cbb8-7524-4b13-84c9-3b122ee7902b-kube-api-access-5s5qs\") pod \"watcher1187-account-delete-rqmbn\" (UID: \"15e7cbb8-7524-4b13-84c9-3b122ee7902b\") " pod="watcher-kuttl-default/watcher1187-account-delete-rqmbn" Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.104145 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15e7cbb8-7524-4b13-84c9-3b122ee7902b-operator-scripts\") pod \"watcher1187-account-delete-rqmbn\" (UID: \"15e7cbb8-7524-4b13-84c9-3b122ee7902b\") " pod="watcher-kuttl-default/watcher1187-account-delete-rqmbn" Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.104976 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15e7cbb8-7524-4b13-84c9-3b122ee7902b-operator-scripts\") pod \"watcher1187-account-delete-rqmbn\" (UID: \"15e7cbb8-7524-4b13-84c9-3b122ee7902b\") " pod="watcher-kuttl-default/watcher1187-account-delete-rqmbn" Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.127193 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s5qs\" (UniqueName: \"kubernetes.io/projected/15e7cbb8-7524-4b13-84c9-3b122ee7902b-kube-api-access-5s5qs\") pod \"watcher1187-account-delete-rqmbn\" (UID: \"15e7cbb8-7524-4b13-84c9-3b122ee7902b\") " pod="watcher-kuttl-default/watcher1187-account-delete-rqmbn" Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.147327 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher1187-account-delete-rqmbn" Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.161462 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d093e349-97d2-49a3-8767-3c6a694ecbc9" path="/var/lib/kubelet/pods/d093e349-97d2-49a3-8767-3c6a694ecbc9/volumes" Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.264823 4814 generic.go:334] "Generic (PLEG): container finished" podID="66d98ba1-f5e2-481e-938d-ec7aa533d11a" containerID="5d9e7239e8f6143369c5980c9c3741ae20f30331b097b13065f17c030f5378c7" exitCode=143 Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.265134 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"66d98ba1-f5e2-481e-938d-ec7aa533d11a","Type":"ContainerDied","Data":"5d9e7239e8f6143369c5980c9c3741ae20f30331b097b13065f17c030f5378c7"} Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.267604 4814 generic.go:334] "Generic (PLEG): container finished" podID="064f90b0-5502-42bc-a1c1-b21c7ef7b9c9" containerID="e3d089f69a0c887f91751722fa587694689b982a9adcbcd96ed83f4eb2d6bd3f" exitCode=143 Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.267627 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9","Type":"ContainerDied","Data":"e3d089f69a0c887f91751722fa587694689b982a9adcbcd96ed83f4eb2d6bd3f"} Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.453499 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.615320 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09c6e69b-7b35-4f61-8780-9fbab13a63e9-logs\") pod \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\" (UID: \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\") " Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.615448 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/09c6e69b-7b35-4f61-8780-9fbab13a63e9-custom-prometheus-ca\") pod \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\" (UID: \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\") " Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.615519 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c6e69b-7b35-4f61-8780-9fbab13a63e9-combined-ca-bundle\") pod \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\" (UID: \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\") " Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.615545 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c6e69b-7b35-4f61-8780-9fbab13a63e9-config-data\") pod \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\" (UID: \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\") " Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.615590 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grk4h\" (UniqueName: \"kubernetes.io/projected/09c6e69b-7b35-4f61-8780-9fbab13a63e9-kube-api-access-grk4h\") pod \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\" (UID: \"09c6e69b-7b35-4f61-8780-9fbab13a63e9\") " Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.619649 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/09c6e69b-7b35-4f61-8780-9fbab13a63e9-logs" (OuterVolumeSpecName: "logs") pod "09c6e69b-7b35-4f61-8780-9fbab13a63e9" (UID: "09c6e69b-7b35-4f61-8780-9fbab13a63e9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.620599 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09c6e69b-7b35-4f61-8780-9fbab13a63e9-kube-api-access-grk4h" (OuterVolumeSpecName: "kube-api-access-grk4h") pod "09c6e69b-7b35-4f61-8780-9fbab13a63e9" (UID: "09c6e69b-7b35-4f61-8780-9fbab13a63e9"). InnerVolumeSpecName "kube-api-access-grk4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.648402 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09c6e69b-7b35-4f61-8780-9fbab13a63e9-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "09c6e69b-7b35-4f61-8780-9fbab13a63e9" (UID: "09c6e69b-7b35-4f61-8780-9fbab13a63e9"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.662080 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09c6e69b-7b35-4f61-8780-9fbab13a63e9-config-data" (OuterVolumeSpecName: "config-data") pod "09c6e69b-7b35-4f61-8780-9fbab13a63e9" (UID: "09c6e69b-7b35-4f61-8780-9fbab13a63e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.662390 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09c6e69b-7b35-4f61-8780-9fbab13a63e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09c6e69b-7b35-4f61-8780-9fbab13a63e9" (UID: "09c6e69b-7b35-4f61-8780-9fbab13a63e9"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.717449 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09c6e69b-7b35-4f61-8780-9fbab13a63e9-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.717491 4814 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/09c6e69b-7b35-4f61-8780-9fbab13a63e9-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.717507 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c6e69b-7b35-4f61-8780-9fbab13a63e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.717520 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c6e69b-7b35-4f61-8780-9fbab13a63e9-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.717531 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grk4h\" (UniqueName: \"kubernetes.io/projected/09c6e69b-7b35-4f61-8780-9fbab13a63e9-kube-api-access-grk4h\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:40 crc kubenswrapper[4814]: I1202 15:11:40.809272 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher1187-account-delete-rqmbn"] Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.165330 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_09c6e69b-7b35-4f61-8780-9fbab13a63e9/watcher-decision-engine/5.log" Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.278722 4814 generic.go:334] "Generic (PLEG): container finished" 
podID="63431300-f0ba-4366-bdbf-4252b564a4e1" containerID="9d9ee428568cbd5458748cf7dfb3ff124421ac5ebe7867156d59b45b81d54fc8" exitCode=0 Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.278799 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"63431300-f0ba-4366-bdbf-4252b564a4e1","Type":"ContainerDied","Data":"9d9ee428568cbd5458748cf7dfb3ff124421ac5ebe7867156d59b45b81d54fc8"} Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.280181 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher1187-account-delete-rqmbn" event={"ID":"15e7cbb8-7524-4b13-84c9-3b122ee7902b","Type":"ContainerStarted","Data":"a4a0e2f25265a1ed395c39a815ce8c76f93c8fc62d93b44a209de754523c480a"} Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.280222 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher1187-account-delete-rqmbn" event={"ID":"15e7cbb8-7524-4b13-84c9-3b122ee7902b","Type":"ContainerStarted","Data":"e59b030dfeceb0d90cf1e4bcb203928c364e1e73ffccedb34ab9dc954d5a9c29"} Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.282549 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.282531 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"09c6e69b-7b35-4f61-8780-9fbab13a63e9","Type":"ContainerDied","Data":"e3aa3d2fd731ddebd920b7bd9811abc7a61b0ac4f914da4692fbee8ce3392d53"} Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.282777 4814 scope.go:117] "RemoveContainer" containerID="7c83f286672515061aa5b03cd3052b75749877f06a2612c0c8f41eeba67c3c9d" Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.312930 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher1187-account-delete-rqmbn" podStartSLOduration=2.312906451 podStartE2EDuration="2.312906451s" podCreationTimestamp="2025-12-02 15:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:11:41.301665143 +0000 UTC m=+2563.426027377" watchObservedRunningTime="2025-12-02 15:11:41.312906451 +0000 UTC m=+2563.437268665" Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.476127 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.488135 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.501136 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.537603 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr2lt\" (UniqueName: \"kubernetes.io/projected/63431300-f0ba-4366-bdbf-4252b564a4e1-kube-api-access-rr2lt\") pod \"63431300-f0ba-4366-bdbf-4252b564a4e1\" (UID: \"63431300-f0ba-4366-bdbf-4252b564a4e1\") " Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.537680 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63431300-f0ba-4366-bdbf-4252b564a4e1-logs\") pod \"63431300-f0ba-4366-bdbf-4252b564a4e1\" (UID: \"63431300-f0ba-4366-bdbf-4252b564a4e1\") " Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.537744 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63431300-f0ba-4366-bdbf-4252b564a4e1-combined-ca-bundle\") pod \"63431300-f0ba-4366-bdbf-4252b564a4e1\" (UID: \"63431300-f0ba-4366-bdbf-4252b564a4e1\") " Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.537791 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63431300-f0ba-4366-bdbf-4252b564a4e1-config-data\") pod \"63431300-f0ba-4366-bdbf-4252b564a4e1\" (UID: \"63431300-f0ba-4366-bdbf-4252b564a4e1\") " Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.538661 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/63431300-f0ba-4366-bdbf-4252b564a4e1-logs" (OuterVolumeSpecName: "logs") pod "63431300-f0ba-4366-bdbf-4252b564a4e1" (UID: "63431300-f0ba-4366-bdbf-4252b564a4e1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.543305 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63431300-f0ba-4366-bdbf-4252b564a4e1-kube-api-access-rr2lt" (OuterVolumeSpecName: "kube-api-access-rr2lt") pod "63431300-f0ba-4366-bdbf-4252b564a4e1" (UID: "63431300-f0ba-4366-bdbf-4252b564a4e1"). InnerVolumeSpecName "kube-api-access-rr2lt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.567171 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63431300-f0ba-4366-bdbf-4252b564a4e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63431300-f0ba-4366-bdbf-4252b564a4e1" (UID: "63431300-f0ba-4366-bdbf-4252b564a4e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.586725 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63431300-f0ba-4366-bdbf-4252b564a4e1-config-data" (OuterVolumeSpecName: "config-data") pod "63431300-f0ba-4366-bdbf-4252b564a4e1" (UID: "63431300-f0ba-4366-bdbf-4252b564a4e1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.638884 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr2lt\" (UniqueName: \"kubernetes.io/projected/63431300-f0ba-4366-bdbf-4252b564a4e1-kube-api-access-rr2lt\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.638911 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63431300-f0ba-4366-bdbf-4252b564a4e1-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.638920 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63431300-f0ba-4366-bdbf-4252b564a4e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.638930 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63431300-f0ba-4366-bdbf-4252b564a4e1-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.916290 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.944341 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="66d98ba1-f5e2-481e-938d-ec7aa533d11a" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.173:9322/\": dial tcp 10.217.0.173:9322: connect: connection refused" Dec 02 15:11:41 crc kubenswrapper[4814]: I1202 15:11:41.944368 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="66d98ba1-f5e2-481e-938d-ec7aa533d11a" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.173:9322/\": dial tcp 10.217.0.173:9322: connect: connection refused" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.045839 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-combined-ca-bundle\") pod \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\" (UID: \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\") " Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.045915 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtxhb\" (UniqueName: \"kubernetes.io/projected/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-kube-api-access-qtxhb\") pod \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\" (UID: \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\") " Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.045979 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-config-data\") pod \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\" (UID: \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\") " Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.046054 4814 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-logs\") pod \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\" (UID: \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\") " Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.046120 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-custom-prometheus-ca\") pod \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\" (UID: \"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9\") " Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.046585 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-logs" (OuterVolumeSpecName: "logs") pod "064f90b0-5502-42bc-a1c1-b21c7ef7b9c9" (UID: "064f90b0-5502-42bc-a1c1-b21c7ef7b9c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.054177 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-kube-api-access-qtxhb" (OuterVolumeSpecName: "kube-api-access-qtxhb") pod "064f90b0-5502-42bc-a1c1-b21c7ef7b9c9" (UID: "064f90b0-5502-42bc-a1c1-b21c7ef7b9c9"). InnerVolumeSpecName "kube-api-access-qtxhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.075303 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "064f90b0-5502-42bc-a1c1-b21c7ef7b9c9" (UID: "064f90b0-5502-42bc-a1c1-b21c7ef7b9c9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.078432 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "064f90b0-5502-42bc-a1c1-b21c7ef7b9c9" (UID: "064f90b0-5502-42bc-a1c1-b21c7ef7b9c9"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.098434 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-config-data" (OuterVolumeSpecName: "config-data") pod "064f90b0-5502-42bc-a1c1-b21c7ef7b9c9" (UID: "064f90b0-5502-42bc-a1c1-b21c7ef7b9c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.147667 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:11:42 crc kubenswrapper[4814]: E1202 15:11:42.148130 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.156806 4814 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.156917 4814 reconciler_common.go:293] "Volume detached for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.156937 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtxhb\" (UniqueName: \"kubernetes.io/projected/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-kube-api-access-qtxhb\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.156958 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.156974 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.162003 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" path="/var/lib/kubelet/pods/09c6e69b-7b35-4f61-8780-9fbab13a63e9/volumes" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.264939 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.292789 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"63431300-f0ba-4366-bdbf-4252b564a4e1","Type":"ContainerDied","Data":"d616a76d1f493ac410b5a0516ec4083ef882851492a36f38779665a82e6194d5"} Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.293024 4814 scope.go:117] "RemoveContainer" containerID="9d9ee428568cbd5458748cf7dfb3ff124421ac5ebe7867156d59b45b81d54fc8" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.293194 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.298615 4814 generic.go:334] "Generic (PLEG): container finished" podID="15e7cbb8-7524-4b13-84c9-3b122ee7902b" containerID="a4a0e2f25265a1ed395c39a815ce8c76f93c8fc62d93b44a209de754523c480a" exitCode=0 Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.298786 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher1187-account-delete-rqmbn" event={"ID":"15e7cbb8-7524-4b13-84c9-3b122ee7902b","Type":"ContainerDied","Data":"a4a0e2f25265a1ed395c39a815ce8c76f93c8fc62d93b44a209de754523c480a"} Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.300729 4814 generic.go:334] "Generic (PLEG): container finished" podID="66d98ba1-f5e2-481e-938d-ec7aa533d11a" containerID="bc910bb3a2e4e1e4463086be16e4f96829134909a33464680c13dd3f8ec0a4a5" exitCode=0 Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.300882 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"66d98ba1-f5e2-481e-938d-ec7aa533d11a","Type":"ContainerDied","Data":"bc910bb3a2e4e1e4463086be16e4f96829134909a33464680c13dd3f8ec0a4a5"} Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.300975 4814 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"66d98ba1-f5e2-481e-938d-ec7aa533d11a","Type":"ContainerDied","Data":"b104e1643e7e0aa29acdd43683ddfba264e1e582f55ab8af1c3933b29fe3b660"} Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.301120 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.303893 4814 generic.go:334] "Generic (PLEG): container finished" podID="064f90b0-5502-42bc-a1c1-b21c7ef7b9c9" containerID="735312b28f66c232bd138935b0f6f91cc2fdeccdc5bda6f489a38a8e236a0b77" exitCode=0 Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.303995 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9","Type":"ContainerDied","Data":"735312b28f66c232bd138935b0f6f91cc2fdeccdc5bda6f489a38a8e236a0b77"} Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.304077 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"064f90b0-5502-42bc-a1c1-b21c7ef7b9c9","Type":"ContainerDied","Data":"f03c6c9445926da5acb44a3898958b4ea9b5a8d9e7d4c5cbad5b3c62cf33632e"} Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.304034 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.316848 4814 scope.go:117] "RemoveContainer" containerID="bc910bb3a2e4e1e4463086be16e4f96829134909a33464680c13dd3f8ec0a4a5" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.336757 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.344044 4814 scope.go:117] "RemoveContainer" containerID="5d9e7239e8f6143369c5980c9c3741ae20f30331b097b13065f17c030f5378c7" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.347932 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.364356 4814 scope.go:117] "RemoveContainer" containerID="bc910bb3a2e4e1e4463086be16e4f96829134909a33464680c13dd3f8ec0a4a5" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.365949 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:11:42 crc kubenswrapper[4814]: E1202 15:11:42.366905 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc910bb3a2e4e1e4463086be16e4f96829134909a33464680c13dd3f8ec0a4a5\": container with ID starting with bc910bb3a2e4e1e4463086be16e4f96829134909a33464680c13dd3f8ec0a4a5 not found: ID does not exist" containerID="bc910bb3a2e4e1e4463086be16e4f96829134909a33464680c13dd3f8ec0a4a5" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.366945 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc910bb3a2e4e1e4463086be16e4f96829134909a33464680c13dd3f8ec0a4a5"} err="failed to get container status \"bc910bb3a2e4e1e4463086be16e4f96829134909a33464680c13dd3f8ec0a4a5\": rpc error: code = NotFound desc = could not find container 
\"bc910bb3a2e4e1e4463086be16e4f96829134909a33464680c13dd3f8ec0a4a5\": container with ID starting with bc910bb3a2e4e1e4463086be16e4f96829134909a33464680c13dd3f8ec0a4a5 not found: ID does not exist" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.366969 4814 scope.go:117] "RemoveContainer" containerID="5d9e7239e8f6143369c5980c9c3741ae20f30331b097b13065f17c030f5378c7" Dec 02 15:11:42 crc kubenswrapper[4814]: E1202 15:11:42.367349 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d9e7239e8f6143369c5980c9c3741ae20f30331b097b13065f17c030f5378c7\": container with ID starting with 5d9e7239e8f6143369c5980c9c3741ae20f30331b097b13065f17c030f5378c7 not found: ID does not exist" containerID="5d9e7239e8f6143369c5980c9c3741ae20f30331b097b13065f17c030f5378c7" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.367376 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9e7239e8f6143369c5980c9c3741ae20f30331b097b13065f17c030f5378c7"} err="failed to get container status \"5d9e7239e8f6143369c5980c9c3741ae20f30331b097b13065f17c030f5378c7\": rpc error: code = NotFound desc = could not find container \"5d9e7239e8f6143369c5980c9c3741ae20f30331b097b13065f17c030f5378c7\": container with ID starting with 5d9e7239e8f6143369c5980c9c3741ae20f30331b097b13065f17c030f5378c7 not found: ID does not exist" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.367389 4814 scope.go:117] "RemoveContainer" containerID="735312b28f66c232bd138935b0f6f91cc2fdeccdc5bda6f489a38a8e236a0b77" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.374619 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.382823 4814 scope.go:117] "RemoveContainer" containerID="e3d089f69a0c887f91751722fa587694689b982a9adcbcd96ed83f4eb2d6bd3f" Dec 02 15:11:42 crc kubenswrapper[4814]: 
I1202 15:11:42.395862 4814 scope.go:117] "RemoveContainer" containerID="735312b28f66c232bd138935b0f6f91cc2fdeccdc5bda6f489a38a8e236a0b77" Dec 02 15:11:42 crc kubenswrapper[4814]: E1202 15:11:42.396203 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"735312b28f66c232bd138935b0f6f91cc2fdeccdc5bda6f489a38a8e236a0b77\": container with ID starting with 735312b28f66c232bd138935b0f6f91cc2fdeccdc5bda6f489a38a8e236a0b77 not found: ID does not exist" containerID="735312b28f66c232bd138935b0f6f91cc2fdeccdc5bda6f489a38a8e236a0b77" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.396259 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"735312b28f66c232bd138935b0f6f91cc2fdeccdc5bda6f489a38a8e236a0b77"} err="failed to get container status \"735312b28f66c232bd138935b0f6f91cc2fdeccdc5bda6f489a38a8e236a0b77\": rpc error: code = NotFound desc = could not find container \"735312b28f66c232bd138935b0f6f91cc2fdeccdc5bda6f489a38a8e236a0b77\": container with ID starting with 735312b28f66c232bd138935b0f6f91cc2fdeccdc5bda6f489a38a8e236a0b77 not found: ID does not exist" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.396286 4814 scope.go:117] "RemoveContainer" containerID="e3d089f69a0c887f91751722fa587694689b982a9adcbcd96ed83f4eb2d6bd3f" Dec 02 15:11:42 crc kubenswrapper[4814]: E1202 15:11:42.396525 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3d089f69a0c887f91751722fa587694689b982a9adcbcd96ed83f4eb2d6bd3f\": container with ID starting with e3d089f69a0c887f91751722fa587694689b982a9adcbcd96ed83f4eb2d6bd3f not found: ID does not exist" containerID="e3d089f69a0c887f91751722fa587694689b982a9adcbcd96ed83f4eb2d6bd3f" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.396552 4814 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e3d089f69a0c887f91751722fa587694689b982a9adcbcd96ed83f4eb2d6bd3f"} err="failed to get container status \"e3d089f69a0c887f91751722fa587694689b982a9adcbcd96ed83f4eb2d6bd3f\": rpc error: code = NotFound desc = could not find container \"e3d089f69a0c887f91751722fa587694689b982a9adcbcd96ed83f4eb2d6bd3f\": container with ID starting with e3d089f69a0c887f91751722fa587694689b982a9adcbcd96ed83f4eb2d6bd3f not found: ID does not exist" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.461185 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66d98ba1-f5e2-481e-938d-ec7aa533d11a-logs\") pod \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\" (UID: \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\") " Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.461605 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrrd8\" (UniqueName: \"kubernetes.io/projected/66d98ba1-f5e2-481e-938d-ec7aa533d11a-kube-api-access-hrrd8\") pod \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\" (UID: \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\") " Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.461700 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/66d98ba1-f5e2-481e-938d-ec7aa533d11a-custom-prometheus-ca\") pod \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\" (UID: \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\") " Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.461728 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66d98ba1-f5e2-481e-938d-ec7aa533d11a-config-data\") pod \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\" (UID: \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\") " Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.461794 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d98ba1-f5e2-481e-938d-ec7aa533d11a-combined-ca-bundle\") pod \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\" (UID: \"66d98ba1-f5e2-481e-938d-ec7aa533d11a\") " Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.461813 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66d98ba1-f5e2-481e-938d-ec7aa533d11a-logs" (OuterVolumeSpecName: "logs") pod "66d98ba1-f5e2-481e-938d-ec7aa533d11a" (UID: "66d98ba1-f5e2-481e-938d-ec7aa533d11a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.462188 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66d98ba1-f5e2-481e-938d-ec7aa533d11a-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.466393 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66d98ba1-f5e2-481e-938d-ec7aa533d11a-kube-api-access-hrrd8" (OuterVolumeSpecName: "kube-api-access-hrrd8") pod "66d98ba1-f5e2-481e-938d-ec7aa533d11a" (UID: "66d98ba1-f5e2-481e-938d-ec7aa533d11a"). InnerVolumeSpecName "kube-api-access-hrrd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.488871 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d98ba1-f5e2-481e-938d-ec7aa533d11a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66d98ba1-f5e2-481e-938d-ec7aa533d11a" (UID: "66d98ba1-f5e2-481e-938d-ec7aa533d11a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.492661 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d98ba1-f5e2-481e-938d-ec7aa533d11a-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "66d98ba1-f5e2-481e-938d-ec7aa533d11a" (UID: "66d98ba1-f5e2-481e-938d-ec7aa533d11a"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.511316 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d98ba1-f5e2-481e-938d-ec7aa533d11a-config-data" (OuterVolumeSpecName: "config-data") pod "66d98ba1-f5e2-481e-938d-ec7aa533d11a" (UID: "66d98ba1-f5e2-481e-938d-ec7aa533d11a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.563520 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d98ba1-f5e2-481e-938d-ec7aa533d11a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.563561 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrrd8\" (UniqueName: \"kubernetes.io/projected/66d98ba1-f5e2-481e-938d-ec7aa533d11a-kube-api-access-hrrd8\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.563578 4814 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/66d98ba1-f5e2-481e-938d-ec7aa533d11a-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.563590 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66d98ba1-f5e2-481e-938d-ec7aa533d11a-config-data\") on node 
\"crc\" DevicePath \"\"" Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.635332 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Dec 02 15:11:42 crc kubenswrapper[4814]: I1202 15:11:42.642706 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Dec 02 15:11:43 crc kubenswrapper[4814]: I1202 15:11:43.639675 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher1187-account-delete-rqmbn" Dec 02 15:11:43 crc kubenswrapper[4814]: I1202 15:11:43.781062 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15e7cbb8-7524-4b13-84c9-3b122ee7902b-operator-scripts\") pod \"15e7cbb8-7524-4b13-84c9-3b122ee7902b\" (UID: \"15e7cbb8-7524-4b13-84c9-3b122ee7902b\") " Dec 02 15:11:43 crc kubenswrapper[4814]: I1202 15:11:43.781251 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s5qs\" (UniqueName: \"kubernetes.io/projected/15e7cbb8-7524-4b13-84c9-3b122ee7902b-kube-api-access-5s5qs\") pod \"15e7cbb8-7524-4b13-84c9-3b122ee7902b\" (UID: \"15e7cbb8-7524-4b13-84c9-3b122ee7902b\") " Dec 02 15:11:43 crc kubenswrapper[4814]: I1202 15:11:43.781694 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15e7cbb8-7524-4b13-84c9-3b122ee7902b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15e7cbb8-7524-4b13-84c9-3b122ee7902b" (UID: "15e7cbb8-7524-4b13-84c9-3b122ee7902b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:11:43 crc kubenswrapper[4814]: I1202 15:11:43.784882 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e7cbb8-7524-4b13-84c9-3b122ee7902b-kube-api-access-5s5qs" (OuterVolumeSpecName: "kube-api-access-5s5qs") pod "15e7cbb8-7524-4b13-84c9-3b122ee7902b" (UID: "15e7cbb8-7524-4b13-84c9-3b122ee7902b"). InnerVolumeSpecName "kube-api-access-5s5qs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:11:43 crc kubenswrapper[4814]: I1202 15:11:43.882695 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15e7cbb8-7524-4b13-84c9-3b122ee7902b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:43 crc kubenswrapper[4814]: I1202 15:11:43.882726 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s5qs\" (UniqueName: \"kubernetes.io/projected/15e7cbb8-7524-4b13-84c9-3b122ee7902b-kube-api-access-5s5qs\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.155943 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="064f90b0-5502-42bc-a1c1-b21c7ef7b9c9" path="/var/lib/kubelet/pods/064f90b0-5502-42bc-a1c1-b21c7ef7b9c9/volumes" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.156528 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63431300-f0ba-4366-bdbf-4252b564a4e1" path="/var/lib/kubelet/pods/63431300-f0ba-4366-bdbf-4252b564a4e1/volumes" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.156999 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66d98ba1-f5e2-481e-938d-ec7aa533d11a" path="/var/lib/kubelet/pods/66d98ba1-f5e2-481e-938d-ec7aa533d11a/volumes" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.348904 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher1187-account-delete-rqmbn" event={"ID":"15e7cbb8-7524-4b13-84c9-3b122ee7902b","Type":"ContainerDied","Data":"e59b030dfeceb0d90cf1e4bcb203928c364e1e73ffccedb34ab9dc954d5a9c29"} Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.348946 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e59b030dfeceb0d90cf1e4bcb203928c364e1e73ffccedb34ab9dc954d5a9c29" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.348996 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher1187-account-delete-rqmbn" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.852212 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-vjj4t"] Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.862731 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-vjj4t"] Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.872993 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher1187-account-delete-rqmbn"] Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.882188 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-1187-account-create-update-dhb89"] Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.888364 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-1187-account-create-update-dhb89"] Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.901851 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher1187-account-delete-rqmbn"] Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.941470 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-j2dll"] Dec 02 15:11:44 crc kubenswrapper[4814]: E1202 15:11:44.941824 4814 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerName="watcher-decision-engine" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.941843 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerName="watcher-decision-engine" Dec 02 15:11:44 crc kubenswrapper[4814]: E1202 15:11:44.941855 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerName="watcher-decision-engine" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.941863 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerName="watcher-decision-engine" Dec 02 15:11:44 crc kubenswrapper[4814]: E1202 15:11:44.941873 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerName="watcher-decision-engine" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.941882 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerName="watcher-decision-engine" Dec 02 15:11:44 crc kubenswrapper[4814]: E1202 15:11:44.941893 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064f90b0-5502-42bc-a1c1-b21c7ef7b9c9" containerName="watcher-kuttl-api-log" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.941901 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="064f90b0-5502-42bc-a1c1-b21c7ef7b9c9" containerName="watcher-kuttl-api-log" Dec 02 15:11:44 crc kubenswrapper[4814]: E1202 15:11:44.941915 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerName="watcher-decision-engine" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.941922 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerName="watcher-decision-engine" Dec 02 15:11:44 crc kubenswrapper[4814]: E1202 
15:11:44.941933 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66d98ba1-f5e2-481e-938d-ec7aa533d11a" containerName="watcher-api" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.941943 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="66d98ba1-f5e2-481e-938d-ec7aa533d11a" containerName="watcher-api" Dec 02 15:11:44 crc kubenswrapper[4814]: E1202 15:11:44.941956 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63431300-f0ba-4366-bdbf-4252b564a4e1" containerName="watcher-applier" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.941963 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="63431300-f0ba-4366-bdbf-4252b564a4e1" containerName="watcher-applier" Dec 02 15:11:44 crc kubenswrapper[4814]: E1202 15:11:44.941975 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerName="watcher-decision-engine" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.941983 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerName="watcher-decision-engine" Dec 02 15:11:44 crc kubenswrapper[4814]: E1202 15:11:44.941996 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66d98ba1-f5e2-481e-938d-ec7aa533d11a" containerName="watcher-kuttl-api-log" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.942005 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="66d98ba1-f5e2-481e-938d-ec7aa533d11a" containerName="watcher-kuttl-api-log" Dec 02 15:11:44 crc kubenswrapper[4814]: E1202 15:11:44.942025 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064f90b0-5502-42bc-a1c1-b21c7ef7b9c9" containerName="watcher-api" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.942032 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="064f90b0-5502-42bc-a1c1-b21c7ef7b9c9" containerName="watcher-api" Dec 02 15:11:44 crc kubenswrapper[4814]: E1202 
15:11:44.942044 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerName="watcher-decision-engine" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.942052 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerName="watcher-decision-engine" Dec 02 15:11:44 crc kubenswrapper[4814]: E1202 15:11:44.942065 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e7cbb8-7524-4b13-84c9-3b122ee7902b" containerName="mariadb-account-delete" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.942075 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e7cbb8-7524-4b13-84c9-3b122ee7902b" containerName="mariadb-account-delete" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.942284 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e7cbb8-7524-4b13-84c9-3b122ee7902b" containerName="mariadb-account-delete" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.942306 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerName="watcher-decision-engine" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.942315 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerName="watcher-decision-engine" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.942325 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="66d98ba1-f5e2-481e-938d-ec7aa533d11a" containerName="watcher-api" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.942340 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerName="watcher-decision-engine" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.942349 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" 
containerName="watcher-decision-engine" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.942363 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="63431300-f0ba-4366-bdbf-4252b564a4e1" containerName="watcher-applier" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.942380 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="064f90b0-5502-42bc-a1c1-b21c7ef7b9c9" containerName="watcher-api" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.942393 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="064f90b0-5502-42bc-a1c1-b21c7ef7b9c9" containerName="watcher-kuttl-api-log" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.942406 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="66d98ba1-f5e2-481e-938d-ec7aa533d11a" containerName="watcher-kuttl-api-log" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.943106 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-j2dll" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.952525 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-j2dll"] Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.998598 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6jhn\" (UniqueName: \"kubernetes.io/projected/cca878c5-ec56-4e83-a0c5-c34d2bb25765-kube-api-access-j6jhn\") pod \"watcher-db-create-j2dll\" (UID: \"cca878c5-ec56-4e83-a0c5-c34d2bb25765\") " pod="watcher-kuttl-default/watcher-db-create-j2dll" Dec 02 15:11:44 crc kubenswrapper[4814]: I1202 15:11:44.998864 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cca878c5-ec56-4e83-a0c5-c34d2bb25765-operator-scripts\") pod \"watcher-db-create-j2dll\" (UID: 
\"cca878c5-ec56-4e83-a0c5-c34d2bb25765\") " pod="watcher-kuttl-default/watcher-db-create-j2dll" Dec 02 15:11:45 crc kubenswrapper[4814]: I1202 15:11:45.042425 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-29ce-account-create-update-848m5"] Dec 02 15:11:45 crc kubenswrapper[4814]: I1202 15:11:45.042894 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerName="watcher-decision-engine" Dec 02 15:11:45 crc kubenswrapper[4814]: I1202 15:11:45.042911 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="09c6e69b-7b35-4f61-8780-9fbab13a63e9" containerName="watcher-decision-engine" Dec 02 15:11:45 crc kubenswrapper[4814]: I1202 15:11:45.043433 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-29ce-account-create-update-848m5" Dec 02 15:11:45 crc kubenswrapper[4814]: I1202 15:11:45.047078 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Dec 02 15:11:45 crc kubenswrapper[4814]: I1202 15:11:45.049742 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-29ce-account-create-update-848m5"] Dec 02 15:11:45 crc kubenswrapper[4814]: I1202 15:11:45.100039 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c242a03-42d3-47e8-9226-511f118bcc63-operator-scripts\") pod \"watcher-29ce-account-create-update-848m5\" (UID: \"0c242a03-42d3-47e8-9226-511f118bcc63\") " pod="watcher-kuttl-default/watcher-29ce-account-create-update-848m5" Dec 02 15:11:45 crc kubenswrapper[4814]: I1202 15:11:45.100408 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6jhn\" (UniqueName: \"kubernetes.io/projected/cca878c5-ec56-4e83-a0c5-c34d2bb25765-kube-api-access-j6jhn\") pod 
\"watcher-db-create-j2dll\" (UID: \"cca878c5-ec56-4e83-a0c5-c34d2bb25765\") " pod="watcher-kuttl-default/watcher-db-create-j2dll" Dec 02 15:11:45 crc kubenswrapper[4814]: I1202 15:11:45.100511 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cca878c5-ec56-4e83-a0c5-c34d2bb25765-operator-scripts\") pod \"watcher-db-create-j2dll\" (UID: \"cca878c5-ec56-4e83-a0c5-c34d2bb25765\") " pod="watcher-kuttl-default/watcher-db-create-j2dll" Dec 02 15:11:45 crc kubenswrapper[4814]: I1202 15:11:45.100609 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hbcg\" (UniqueName: \"kubernetes.io/projected/0c242a03-42d3-47e8-9226-511f118bcc63-kube-api-access-5hbcg\") pod \"watcher-29ce-account-create-update-848m5\" (UID: \"0c242a03-42d3-47e8-9226-511f118bcc63\") " pod="watcher-kuttl-default/watcher-29ce-account-create-update-848m5" Dec 02 15:11:45 crc kubenswrapper[4814]: I1202 15:11:45.101365 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cca878c5-ec56-4e83-a0c5-c34d2bb25765-operator-scripts\") pod \"watcher-db-create-j2dll\" (UID: \"cca878c5-ec56-4e83-a0c5-c34d2bb25765\") " pod="watcher-kuttl-default/watcher-db-create-j2dll" Dec 02 15:11:45 crc kubenswrapper[4814]: I1202 15:11:45.128824 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6jhn\" (UniqueName: \"kubernetes.io/projected/cca878c5-ec56-4e83-a0c5-c34d2bb25765-kube-api-access-j6jhn\") pod \"watcher-db-create-j2dll\" (UID: \"cca878c5-ec56-4e83-a0c5-c34d2bb25765\") " pod="watcher-kuttl-default/watcher-db-create-j2dll" Dec 02 15:11:45 crc kubenswrapper[4814]: I1202 15:11:45.201473 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/0c242a03-42d3-47e8-9226-511f118bcc63-operator-scripts\") pod \"watcher-29ce-account-create-update-848m5\" (UID: \"0c242a03-42d3-47e8-9226-511f118bcc63\") " pod="watcher-kuttl-default/watcher-29ce-account-create-update-848m5" Dec 02 15:11:45 crc kubenswrapper[4814]: I1202 15:11:45.201582 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hbcg\" (UniqueName: \"kubernetes.io/projected/0c242a03-42d3-47e8-9226-511f118bcc63-kube-api-access-5hbcg\") pod \"watcher-29ce-account-create-update-848m5\" (UID: \"0c242a03-42d3-47e8-9226-511f118bcc63\") " pod="watcher-kuttl-default/watcher-29ce-account-create-update-848m5" Dec 02 15:11:45 crc kubenswrapper[4814]: I1202 15:11:45.202491 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c242a03-42d3-47e8-9226-511f118bcc63-operator-scripts\") pod \"watcher-29ce-account-create-update-848m5\" (UID: \"0c242a03-42d3-47e8-9226-511f118bcc63\") " pod="watcher-kuttl-default/watcher-29ce-account-create-update-848m5" Dec 02 15:11:45 crc kubenswrapper[4814]: I1202 15:11:45.221923 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hbcg\" (UniqueName: \"kubernetes.io/projected/0c242a03-42d3-47e8-9226-511f118bcc63-kube-api-access-5hbcg\") pod \"watcher-29ce-account-create-update-848m5\" (UID: \"0c242a03-42d3-47e8-9226-511f118bcc63\") " pod="watcher-kuttl-default/watcher-29ce-account-create-update-848m5" Dec 02 15:11:45 crc kubenswrapper[4814]: I1202 15:11:45.282194 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-j2dll" Dec 02 15:11:45 crc kubenswrapper[4814]: I1202 15:11:45.367430 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-29ce-account-create-update-848m5" Dec 02 15:11:45 crc kubenswrapper[4814]: I1202 15:11:45.746008 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-j2dll"] Dec 02 15:11:45 crc kubenswrapper[4814]: W1202 15:11:45.753675 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcca878c5_ec56_4e83_a0c5_c34d2bb25765.slice/crio-520348605f83fffe0704b8b98c0f3d62e672bc653530d6dd70c0926ef08b2f19 WatchSource:0}: Error finding container 520348605f83fffe0704b8b98c0f3d62e672bc653530d6dd70c0926ef08b2f19: Status 404 returned error can't find the container with id 520348605f83fffe0704b8b98c0f3d62e672bc653530d6dd70c0926ef08b2f19 Dec 02 15:11:45 crc kubenswrapper[4814]: I1202 15:11:45.863867 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-29ce-account-create-update-848m5"] Dec 02 15:11:45 crc kubenswrapper[4814]: W1202 15:11:45.866757 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c242a03_42d3_47e8_9226_511f118bcc63.slice/crio-143ab6be20f4ed67844cba37ca62a84cb669343df1f5d7e8813448f52772dbbc WatchSource:0}: Error finding container 143ab6be20f4ed67844cba37ca62a84cb669343df1f5d7e8813448f52772dbbc: Status 404 returned error can't find the container with id 143ab6be20f4ed67844cba37ca62a84cb669343df1f5d7e8813448f52772dbbc Dec 02 15:11:46 crc kubenswrapper[4814]: I1202 15:11:46.156819 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e7cbb8-7524-4b13-84c9-3b122ee7902b" path="/var/lib/kubelet/pods/15e7cbb8-7524-4b13-84c9-3b122ee7902b/volumes" Dec 02 15:11:46 crc kubenswrapper[4814]: I1202 15:11:46.158170 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2099001a-40b2-4a49-a97d-04319d64a597" 
path="/var/lib/kubelet/pods/2099001a-40b2-4a49-a97d-04319d64a597/volumes" Dec 02 15:11:46 crc kubenswrapper[4814]: I1202 15:11:46.158803 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a2beaca-a78b-4a47-a670-87001e8cb226" path="/var/lib/kubelet/pods/2a2beaca-a78b-4a47-a670-87001e8cb226/volumes" Dec 02 15:11:46 crc kubenswrapper[4814]: I1202 15:11:46.364930 4814 generic.go:334] "Generic (PLEG): container finished" podID="cca878c5-ec56-4e83-a0c5-c34d2bb25765" containerID="5f06092addd2ffaa8a2e24e86ec80ccb1eb293eb7b02ae028dfc5cd7ad54655d" exitCode=0 Dec 02 15:11:46 crc kubenswrapper[4814]: I1202 15:11:46.365000 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-j2dll" event={"ID":"cca878c5-ec56-4e83-a0c5-c34d2bb25765","Type":"ContainerDied","Data":"5f06092addd2ffaa8a2e24e86ec80ccb1eb293eb7b02ae028dfc5cd7ad54655d"} Dec 02 15:11:46 crc kubenswrapper[4814]: I1202 15:11:46.365026 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-j2dll" event={"ID":"cca878c5-ec56-4e83-a0c5-c34d2bb25765","Type":"ContainerStarted","Data":"520348605f83fffe0704b8b98c0f3d62e672bc653530d6dd70c0926ef08b2f19"} Dec 02 15:11:46 crc kubenswrapper[4814]: I1202 15:11:46.366313 4814 generic.go:334] "Generic (PLEG): container finished" podID="0c242a03-42d3-47e8-9226-511f118bcc63" containerID="11e955693a5312f39523d5e270f973331738ac38bff4bd99a38b243c38c7f62e" exitCode=0 Dec 02 15:11:46 crc kubenswrapper[4814]: I1202 15:11:46.366346 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-29ce-account-create-update-848m5" event={"ID":"0c242a03-42d3-47e8-9226-511f118bcc63","Type":"ContainerDied","Data":"11e955693a5312f39523d5e270f973331738ac38bff4bd99a38b243c38c7f62e"} Dec 02 15:11:46 crc kubenswrapper[4814]: I1202 15:11:46.366397 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-29ce-account-create-update-848m5" 
event={"ID":"0c242a03-42d3-47e8-9226-511f118bcc63","Type":"ContainerStarted","Data":"143ab6be20f4ed67844cba37ca62a84cb669343df1f5d7e8813448f52772dbbc"} Dec 02 15:11:47 crc kubenswrapper[4814]: I1202 15:11:47.849671 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-j2dll" Dec 02 15:11:47 crc kubenswrapper[4814]: I1202 15:11:47.940125 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6jhn\" (UniqueName: \"kubernetes.io/projected/cca878c5-ec56-4e83-a0c5-c34d2bb25765-kube-api-access-j6jhn\") pod \"cca878c5-ec56-4e83-a0c5-c34d2bb25765\" (UID: \"cca878c5-ec56-4e83-a0c5-c34d2bb25765\") " Dec 02 15:11:47 crc kubenswrapper[4814]: I1202 15:11:47.940303 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cca878c5-ec56-4e83-a0c5-c34d2bb25765-operator-scripts\") pod \"cca878c5-ec56-4e83-a0c5-c34d2bb25765\" (UID: \"cca878c5-ec56-4e83-a0c5-c34d2bb25765\") " Dec 02 15:11:47 crc kubenswrapper[4814]: I1202 15:11:47.941247 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cca878c5-ec56-4e83-a0c5-c34d2bb25765-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cca878c5-ec56-4e83-a0c5-c34d2bb25765" (UID: "cca878c5-ec56-4e83-a0c5-c34d2bb25765"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:11:47 crc kubenswrapper[4814]: I1202 15:11:47.946956 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cca878c5-ec56-4e83-a0c5-c34d2bb25765-kube-api-access-j6jhn" (OuterVolumeSpecName: "kube-api-access-j6jhn") pod "cca878c5-ec56-4e83-a0c5-c34d2bb25765" (UID: "cca878c5-ec56-4e83-a0c5-c34d2bb25765"). InnerVolumeSpecName "kube-api-access-j6jhn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:11:48 crc kubenswrapper[4814]: I1202 15:11:48.019286 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-29ce-account-create-update-848m5" Dec 02 15:11:48 crc kubenswrapper[4814]: I1202 15:11:48.052200 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cca878c5-ec56-4e83-a0c5-c34d2bb25765-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:48 crc kubenswrapper[4814]: I1202 15:11:48.052258 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6jhn\" (UniqueName: \"kubernetes.io/projected/cca878c5-ec56-4e83-a0c5-c34d2bb25765-kube-api-access-j6jhn\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:48 crc kubenswrapper[4814]: I1202 15:11:48.153219 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hbcg\" (UniqueName: \"kubernetes.io/projected/0c242a03-42d3-47e8-9226-511f118bcc63-kube-api-access-5hbcg\") pod \"0c242a03-42d3-47e8-9226-511f118bcc63\" (UID: \"0c242a03-42d3-47e8-9226-511f118bcc63\") " Dec 02 15:11:48 crc kubenswrapper[4814]: I1202 15:11:48.153631 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c242a03-42d3-47e8-9226-511f118bcc63-operator-scripts\") pod \"0c242a03-42d3-47e8-9226-511f118bcc63\" (UID: \"0c242a03-42d3-47e8-9226-511f118bcc63\") " Dec 02 15:11:48 crc kubenswrapper[4814]: I1202 15:11:48.154478 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c242a03-42d3-47e8-9226-511f118bcc63-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c242a03-42d3-47e8-9226-511f118bcc63" (UID: "0c242a03-42d3-47e8-9226-511f118bcc63"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:11:48 crc kubenswrapper[4814]: I1202 15:11:48.160180 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c242a03-42d3-47e8-9226-511f118bcc63-kube-api-access-5hbcg" (OuterVolumeSpecName: "kube-api-access-5hbcg") pod "0c242a03-42d3-47e8-9226-511f118bcc63" (UID: "0c242a03-42d3-47e8-9226-511f118bcc63"). InnerVolumeSpecName "kube-api-access-5hbcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:11:48 crc kubenswrapper[4814]: I1202 15:11:48.255909 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hbcg\" (UniqueName: \"kubernetes.io/projected/0c242a03-42d3-47e8-9226-511f118bcc63-kube-api-access-5hbcg\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:48 crc kubenswrapper[4814]: I1202 15:11:48.255964 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c242a03-42d3-47e8-9226-511f118bcc63-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:48 crc kubenswrapper[4814]: I1202 15:11:48.381497 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-29ce-account-create-update-848m5" event={"ID":"0c242a03-42d3-47e8-9226-511f118bcc63","Type":"ContainerDied","Data":"143ab6be20f4ed67844cba37ca62a84cb669343df1f5d7e8813448f52772dbbc"} Dec 02 15:11:48 crc kubenswrapper[4814]: I1202 15:11:48.381548 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="143ab6be20f4ed67844cba37ca62a84cb669343df1f5d7e8813448f52772dbbc" Dec 02 15:11:48 crc kubenswrapper[4814]: I1202 15:11:48.381516 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-29ce-account-create-update-848m5" Dec 02 15:11:48 crc kubenswrapper[4814]: I1202 15:11:48.382953 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-j2dll" event={"ID":"cca878c5-ec56-4e83-a0c5-c34d2bb25765","Type":"ContainerDied","Data":"520348605f83fffe0704b8b98c0f3d62e672bc653530d6dd70c0926ef08b2f19"} Dec 02 15:11:48 crc kubenswrapper[4814]: I1202 15:11:48.382986 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="520348605f83fffe0704b8b98c0f3d62e672bc653530d6dd70c0926ef08b2f19" Dec 02 15:11:48 crc kubenswrapper[4814]: I1202 15:11:48.383195 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-j2dll" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 15:11:50.284772 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll"] Dec 02 15:11:50 crc kubenswrapper[4814]: E1202 15:11:50.285370 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c242a03-42d3-47e8-9226-511f118bcc63" containerName="mariadb-account-create-update" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 15:11:50.285385 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c242a03-42d3-47e8-9226-511f118bcc63" containerName="mariadb-account-create-update" Dec 02 15:11:50 crc kubenswrapper[4814]: E1202 15:11:50.285402 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cca878c5-ec56-4e83-a0c5-c34d2bb25765" containerName="mariadb-database-create" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 15:11:50.285414 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="cca878c5-ec56-4e83-a0c5-c34d2bb25765" containerName="mariadb-database-create" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 15:11:50.285564 4814 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0c242a03-42d3-47e8-9226-511f118bcc63" containerName="mariadb-account-create-update" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 15:11:50.285603 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="cca878c5-ec56-4e83-a0c5-c34d2bb25765" containerName="mariadb-database-create" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 15:11:50.286625 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 15:11:50.289062 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-hq8nx" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 15:11:50.289326 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 15:11:50.293373 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll"] Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 15:11:50.386000 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f46hs\" (UniqueName: \"kubernetes.io/projected/48032874-84c0-4869-9ff8-86cf9e4ea9a6-kube-api-access-f46hs\") pod \"watcher-kuttl-db-sync-bk9ll\" (UID: \"48032874-84c0-4869-9ff8-86cf9e4ea9a6\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 15:11:50.386120 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48032874-84c0-4869-9ff8-86cf9e4ea9a6-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bk9ll\" (UID: \"48032874-84c0-4869-9ff8-86cf9e4ea9a6\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 15:11:50.386153 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48032874-84c0-4869-9ff8-86cf9e4ea9a6-config-data\") pod \"watcher-kuttl-db-sync-bk9ll\" (UID: \"48032874-84c0-4869-9ff8-86cf9e4ea9a6\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 15:11:50.386332 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/48032874-84c0-4869-9ff8-86cf9e4ea9a6-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bk9ll\" (UID: \"48032874-84c0-4869-9ff8-86cf9e4ea9a6\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 15:11:50.487538 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48032874-84c0-4869-9ff8-86cf9e4ea9a6-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bk9ll\" (UID: \"48032874-84c0-4869-9ff8-86cf9e4ea9a6\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 15:11:50.487603 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48032874-84c0-4869-9ff8-86cf9e4ea9a6-config-data\") pod \"watcher-kuttl-db-sync-bk9ll\" (UID: \"48032874-84c0-4869-9ff8-86cf9e4ea9a6\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 15:11:50.487632 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/48032874-84c0-4869-9ff8-86cf9e4ea9a6-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bk9ll\" (UID: \"48032874-84c0-4869-9ff8-86cf9e4ea9a6\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 
15:11:50.487715 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f46hs\" (UniqueName: \"kubernetes.io/projected/48032874-84c0-4869-9ff8-86cf9e4ea9a6-kube-api-access-f46hs\") pod \"watcher-kuttl-db-sync-bk9ll\" (UID: \"48032874-84c0-4869-9ff8-86cf9e4ea9a6\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 15:11:50.491845 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48032874-84c0-4869-9ff8-86cf9e4ea9a6-config-data\") pod \"watcher-kuttl-db-sync-bk9ll\" (UID: \"48032874-84c0-4869-9ff8-86cf9e4ea9a6\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 15:11:50.492520 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/48032874-84c0-4869-9ff8-86cf9e4ea9a6-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bk9ll\" (UID: \"48032874-84c0-4869-9ff8-86cf9e4ea9a6\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 15:11:50.492729 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48032874-84c0-4869-9ff8-86cf9e4ea9a6-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bk9ll\" (UID: \"48032874-84c0-4869-9ff8-86cf9e4ea9a6\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 15:11:50.515053 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f46hs\" (UniqueName: \"kubernetes.io/projected/48032874-84c0-4869-9ff8-86cf9e4ea9a6-kube-api-access-f46hs\") pod \"watcher-kuttl-db-sync-bk9ll\" (UID: \"48032874-84c0-4869-9ff8-86cf9e4ea9a6\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" Dec 02 15:11:50 crc kubenswrapper[4814]: I1202 
15:11:50.602767 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" Dec 02 15:11:51 crc kubenswrapper[4814]: I1202 15:11:51.112645 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll"] Dec 02 15:11:51 crc kubenswrapper[4814]: I1202 15:11:51.409158 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" event={"ID":"48032874-84c0-4869-9ff8-86cf9e4ea9a6","Type":"ContainerStarted","Data":"4e65909e45e53896a4950996eb412d6892e5690bc1364bb2a1bfc793a84317ed"} Dec 02 15:11:51 crc kubenswrapper[4814]: I1202 15:11:51.409203 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" event={"ID":"48032874-84c0-4869-9ff8-86cf9e4ea9a6","Type":"ContainerStarted","Data":"7f036b99a29c5ea0792bcda67bf1579ad951ab5e41c50c5bc2fd627064ae54c2"} Dec 02 15:11:51 crc kubenswrapper[4814]: I1202 15:11:51.432544 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" podStartSLOduration=1.432526725 podStartE2EDuration="1.432526725s" podCreationTimestamp="2025-12-02 15:11:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:11:51.427508251 +0000 UTC m=+2573.551870445" watchObservedRunningTime="2025-12-02 15:11:51.432526725 +0000 UTC m=+2573.556888919" Dec 02 15:11:53 crc kubenswrapper[4814]: I1202 15:11:53.430128 4814 generic.go:334] "Generic (PLEG): container finished" podID="48032874-84c0-4869-9ff8-86cf9e4ea9a6" containerID="4e65909e45e53896a4950996eb412d6892e5690bc1364bb2a1bfc793a84317ed" exitCode=0 Dec 02 15:11:53 crc kubenswrapper[4814]: I1202 15:11:53.430187 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" 
event={"ID":"48032874-84c0-4869-9ff8-86cf9e4ea9a6","Type":"ContainerDied","Data":"4e65909e45e53896a4950996eb412d6892e5690bc1364bb2a1bfc793a84317ed"} Dec 02 15:11:54 crc kubenswrapper[4814]: I1202 15:11:54.802404 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" Dec 02 15:11:54 crc kubenswrapper[4814]: I1202 15:11:54.962065 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48032874-84c0-4869-9ff8-86cf9e4ea9a6-config-data\") pod \"48032874-84c0-4869-9ff8-86cf9e4ea9a6\" (UID: \"48032874-84c0-4869-9ff8-86cf9e4ea9a6\") " Dec 02 15:11:54 crc kubenswrapper[4814]: I1202 15:11:54.962211 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48032874-84c0-4869-9ff8-86cf9e4ea9a6-combined-ca-bundle\") pod \"48032874-84c0-4869-9ff8-86cf9e4ea9a6\" (UID: \"48032874-84c0-4869-9ff8-86cf9e4ea9a6\") " Dec 02 15:11:54 crc kubenswrapper[4814]: I1202 15:11:54.962348 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f46hs\" (UniqueName: \"kubernetes.io/projected/48032874-84c0-4869-9ff8-86cf9e4ea9a6-kube-api-access-f46hs\") pod \"48032874-84c0-4869-9ff8-86cf9e4ea9a6\" (UID: \"48032874-84c0-4869-9ff8-86cf9e4ea9a6\") " Dec 02 15:11:54 crc kubenswrapper[4814]: I1202 15:11:54.962411 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/48032874-84c0-4869-9ff8-86cf9e4ea9a6-db-sync-config-data\") pod \"48032874-84c0-4869-9ff8-86cf9e4ea9a6\" (UID: \"48032874-84c0-4869-9ff8-86cf9e4ea9a6\") " Dec 02 15:11:54 crc kubenswrapper[4814]: I1202 15:11:54.970072 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/48032874-84c0-4869-9ff8-86cf9e4ea9a6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "48032874-84c0-4869-9ff8-86cf9e4ea9a6" (UID: "48032874-84c0-4869-9ff8-86cf9e4ea9a6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:11:54 crc kubenswrapper[4814]: I1202 15:11:54.970450 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48032874-84c0-4869-9ff8-86cf9e4ea9a6-kube-api-access-f46hs" (OuterVolumeSpecName: "kube-api-access-f46hs") pod "48032874-84c0-4869-9ff8-86cf9e4ea9a6" (UID: "48032874-84c0-4869-9ff8-86cf9e4ea9a6"). InnerVolumeSpecName "kube-api-access-f46hs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.008809 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48032874-84c0-4869-9ff8-86cf9e4ea9a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48032874-84c0-4869-9ff8-86cf9e4ea9a6" (UID: "48032874-84c0-4869-9ff8-86cf9e4ea9a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.035992 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48032874-84c0-4869-9ff8-86cf9e4ea9a6-config-data" (OuterVolumeSpecName: "config-data") pod "48032874-84c0-4869-9ff8-86cf9e4ea9a6" (UID: "48032874-84c0-4869-9ff8-86cf9e4ea9a6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.064774 4814 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/48032874-84c0-4869-9ff8-86cf9e4ea9a6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.064826 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48032874-84c0-4869-9ff8-86cf9e4ea9a6-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.064848 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48032874-84c0-4869-9ff8-86cf9e4ea9a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.064867 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f46hs\" (UniqueName: \"kubernetes.io/projected/48032874-84c0-4869-9ff8-86cf9e4ea9a6-kube-api-access-f46hs\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.449111 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.450885 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll" event={"ID":"48032874-84c0-4869-9ff8-86cf9e4ea9a6","Type":"ContainerDied","Data":"7f036b99a29c5ea0792bcda67bf1579ad951ab5e41c50c5bc2fd627064ae54c2"} Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.450940 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f036b99a29c5ea0792bcda67bf1579ad951ab5e41c50c5bc2fd627064ae54c2" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.763492 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:11:55 crc kubenswrapper[4814]: E1202 15:11:55.763883 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48032874-84c0-4869-9ff8-86cf9e4ea9a6" containerName="watcher-kuttl-db-sync" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.763902 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="48032874-84c0-4869-9ff8-86cf9e4ea9a6" containerName="watcher-kuttl-db-sync" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.764101 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="48032874-84c0-4869-9ff8-86cf9e4ea9a6" containerName="watcher-kuttl-db-sync" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.765190 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.771098 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.771215 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-hq8nx" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.777067 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b53b09c-4d33-4898-99a0-77742974a99a-logs\") pod \"watcher-kuttl-api-0\" (UID: \"1b53b09c-4d33-4898-99a0-77742974a99a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.777554 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcz8z\" (UniqueName: \"kubernetes.io/projected/1b53b09c-4d33-4898-99a0-77742974a99a-kube-api-access-hcz8z\") pod \"watcher-kuttl-api-0\" (UID: \"1b53b09c-4d33-4898-99a0-77742974a99a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.777608 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b53b09c-4d33-4898-99a0-77742974a99a-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"1b53b09c-4d33-4898-99a0-77742974a99a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.777634 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1b53b09c-4d33-4898-99a0-77742974a99a-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: 
\"1b53b09c-4d33-4898-99a0-77742974a99a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.777657 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b53b09c-4d33-4898-99a0-77742974a99a-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"1b53b09c-4d33-4898-99a0-77742974a99a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.778139 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.779256 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.789355 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.795990 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.810866 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.812153 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.816028 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.819434 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.858274 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.879093 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3f4591b-3da5-4951-9288-57f89b217467-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3f4591b-3da5-4951-9288-57f89b217467\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.879153 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3f4591b-3da5-4951-9288-57f89b217467-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3f4591b-3da5-4951-9288-57f89b217467\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.879168 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02474f6-d699-4ee7-818a-9aac661e1c27-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"a02474f6-d699-4ee7-818a-9aac661e1c27\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.879190 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b53b09c-4d33-4898-99a0-77742974a99a-logs\") pod \"watcher-kuttl-api-0\" (UID: \"1b53b09c-4d33-4898-99a0-77742974a99a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.879204 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a02474f6-d699-4ee7-818a-9aac661e1c27-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"a02474f6-d699-4ee7-818a-9aac661e1c27\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.879251 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e3f4591b-3da5-4951-9288-57f89b217467-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3f4591b-3da5-4951-9288-57f89b217467\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.879279 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzd97\" (UniqueName: \"kubernetes.io/projected/e3f4591b-3da5-4951-9288-57f89b217467-kube-api-access-tzd97\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3f4591b-3da5-4951-9288-57f89b217467\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.879325 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a02474f6-d699-4ee7-818a-9aac661e1c27-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a02474f6-d699-4ee7-818a-9aac661e1c27\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.879345 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hcz8z\" (UniqueName: \"kubernetes.io/projected/1b53b09c-4d33-4898-99a0-77742974a99a-kube-api-access-hcz8z\") pod \"watcher-kuttl-api-0\" (UID: \"1b53b09c-4d33-4898-99a0-77742974a99a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.879365 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f4591b-3da5-4951-9288-57f89b217467-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3f4591b-3da5-4951-9288-57f89b217467\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.879392 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b53b09c-4d33-4898-99a0-77742974a99a-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"1b53b09c-4d33-4898-99a0-77742974a99a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.879410 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1b53b09c-4d33-4898-99a0-77742974a99a-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"1b53b09c-4d33-4898-99a0-77742974a99a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.879428 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b53b09c-4d33-4898-99a0-77742974a99a-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"1b53b09c-4d33-4898-99a0-77742974a99a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.879774 4814 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b53b09c-4d33-4898-99a0-77742974a99a-logs\") pod \"watcher-kuttl-api-0\" (UID: \"1b53b09c-4d33-4898-99a0-77742974a99a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.880416 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw77r\" (UniqueName: \"kubernetes.io/projected/a02474f6-d699-4ee7-818a-9aac661e1c27-kube-api-access-fw77r\") pod \"watcher-kuttl-applier-0\" (UID: \"a02474f6-d699-4ee7-818a-9aac661e1c27\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.883839 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1b53b09c-4d33-4898-99a0-77742974a99a-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"1b53b09c-4d33-4898-99a0-77742974a99a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.884661 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b53b09c-4d33-4898-99a0-77742974a99a-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"1b53b09c-4d33-4898-99a0-77742974a99a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.898815 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcz8z\" (UniqueName: \"kubernetes.io/projected/1b53b09c-4d33-4898-99a0-77742974a99a-kube-api-access-hcz8z\") pod \"watcher-kuttl-api-0\" (UID: \"1b53b09c-4d33-4898-99a0-77742974a99a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.902169 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1b53b09c-4d33-4898-99a0-77742974a99a-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"1b53b09c-4d33-4898-99a0-77742974a99a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.980969 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw77r\" (UniqueName: \"kubernetes.io/projected/a02474f6-d699-4ee7-818a-9aac661e1c27-kube-api-access-fw77r\") pod \"watcher-kuttl-applier-0\" (UID: \"a02474f6-d699-4ee7-818a-9aac661e1c27\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.981268 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3f4591b-3da5-4951-9288-57f89b217467-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3f4591b-3da5-4951-9288-57f89b217467\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.981377 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3f4591b-3da5-4951-9288-57f89b217467-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3f4591b-3da5-4951-9288-57f89b217467\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.981458 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02474f6-d699-4ee7-818a-9aac661e1c27-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"a02474f6-d699-4ee7-818a-9aac661e1c27\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.981545 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a02474f6-d699-4ee7-818a-9aac661e1c27-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"a02474f6-d699-4ee7-818a-9aac661e1c27\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.981654 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e3f4591b-3da5-4951-9288-57f89b217467-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3f4591b-3da5-4951-9288-57f89b217467\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.981750 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzd97\" (UniqueName: \"kubernetes.io/projected/e3f4591b-3da5-4951-9288-57f89b217467-kube-api-access-tzd97\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3f4591b-3da5-4951-9288-57f89b217467\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.981845 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a02474f6-d699-4ee7-818a-9aac661e1c27-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a02474f6-d699-4ee7-818a-9aac661e1c27\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.981930 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f4591b-3da5-4951-9288-57f89b217467-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3f4591b-3da5-4951-9288-57f89b217467\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.981791 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e3f4591b-3da5-4951-9288-57f89b217467-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3f4591b-3da5-4951-9288-57f89b217467\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.982700 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a02474f6-d699-4ee7-818a-9aac661e1c27-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"a02474f6-d699-4ee7-818a-9aac661e1c27\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.985878 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3f4591b-3da5-4951-9288-57f89b217467-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3f4591b-3da5-4951-9288-57f89b217467\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.985962 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02474f6-d699-4ee7-818a-9aac661e1c27-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"a02474f6-d699-4ee7-818a-9aac661e1c27\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.986004 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a02474f6-d699-4ee7-818a-9aac661e1c27-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a02474f6-d699-4ee7-818a-9aac661e1c27\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.988336 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e3f4591b-3da5-4951-9288-57f89b217467-custom-prometheus-ca\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"e3f4591b-3da5-4951-9288-57f89b217467\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:11:55 crc kubenswrapper[4814]: I1202 15:11:55.990147 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f4591b-3da5-4951-9288-57f89b217467-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3f4591b-3da5-4951-9288-57f89b217467\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:11:56 crc kubenswrapper[4814]: I1202 15:11:56.001826 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw77r\" (UniqueName: \"kubernetes.io/projected/a02474f6-d699-4ee7-818a-9aac661e1c27-kube-api-access-fw77r\") pod \"watcher-kuttl-applier-0\" (UID: \"a02474f6-d699-4ee7-818a-9aac661e1c27\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:11:56 crc kubenswrapper[4814]: I1202 15:11:56.002042 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzd97\" (UniqueName: \"kubernetes.io/projected/e3f4591b-3da5-4951-9288-57f89b217467-kube-api-access-tzd97\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3f4591b-3da5-4951-9288-57f89b217467\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:11:56 crc kubenswrapper[4814]: I1202 15:11:56.091999 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:56 crc kubenswrapper[4814]: I1202 15:11:56.110023 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:11:56 crc kubenswrapper[4814]: I1202 15:11:56.131906 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:11:56 crc kubenswrapper[4814]: I1202 15:11:56.145494 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:11:56 crc kubenswrapper[4814]: E1202 15:11:56.145692 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:11:56 crc kubenswrapper[4814]: I1202 15:11:56.560637 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:11:56 crc kubenswrapper[4814]: I1202 15:11:56.650594 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:11:56 crc kubenswrapper[4814]: I1202 15:11:56.658780 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:11:56 crc kubenswrapper[4814]: W1202 15:11:56.668613 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda02474f6_d699_4ee7_818a_9aac661e1c27.slice/crio-6acf4dfa98a0cc5b802135e84e3ede55c7c013e409f75451488a97886e4d11dd WatchSource:0}: Error finding container 6acf4dfa98a0cc5b802135e84e3ede55c7c013e409f75451488a97886e4d11dd: Status 404 returned error can't find the container with id 6acf4dfa98a0cc5b802135e84e3ede55c7c013e409f75451488a97886e4d11dd Dec 02 15:11:57 crc kubenswrapper[4814]: I1202 15:11:57.471732 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"a02474f6-d699-4ee7-818a-9aac661e1c27","Type":"ContainerStarted","Data":"4c3ebab0cad03da2f720b9e295dca94627bff264cdef0cfe76a08c3e5e9b87de"} Dec 02 15:11:57 crc kubenswrapper[4814]: I1202 15:11:57.472034 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a02474f6-d699-4ee7-818a-9aac661e1c27","Type":"ContainerStarted","Data":"6acf4dfa98a0cc5b802135e84e3ede55c7c013e409f75451488a97886e4d11dd"} Dec 02 15:11:57 crc kubenswrapper[4814]: I1202 15:11:57.478019 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1b53b09c-4d33-4898-99a0-77742974a99a","Type":"ContainerStarted","Data":"193d9ad0ef5ca7a2ddbc2be75c47a76950e389e9cf686ec3f9adf6eb363df8d3"} Dec 02 15:11:57 crc kubenswrapper[4814]: I1202 15:11:57.478073 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1b53b09c-4d33-4898-99a0-77742974a99a","Type":"ContainerStarted","Data":"5997e7fa0e3a33555ecfa195f5d88b5715fd236b4b697988843c1b0e1d4c7c7c"} Dec 02 15:11:57 crc kubenswrapper[4814]: I1202 15:11:57.478086 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1b53b09c-4d33-4898-99a0-77742974a99a","Type":"ContainerStarted","Data":"5192ec768be51ceb492585cadeffc74fa8de61256d00fea9d6681d8ce6ea0f59"} Dec 02 15:11:57 crc kubenswrapper[4814]: I1202 15:11:57.479365 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:57 crc kubenswrapper[4814]: I1202 15:11:57.518623 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.518601657 podStartE2EDuration="2.518601657s" podCreationTimestamp="2025-12-02 15:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:11:57.510527518 +0000 UTC m=+2579.634889722" watchObservedRunningTime="2025-12-02 15:11:57.518601657 +0000 UTC m=+2579.642963861" Dec 02 15:11:57 crc kubenswrapper[4814]: I1202 15:11:57.520469 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e3f4591b-3da5-4951-9288-57f89b217467","Type":"ContainerStarted","Data":"1d8be242de9d13dec52696f91895aa9766e6b3c5982200bbea5497d2137d3851"} Dec 02 15:11:57 crc kubenswrapper[4814]: I1202 15:11:57.520516 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e3f4591b-3da5-4951-9288-57f89b217467","Type":"ContainerStarted","Data":"adda0447597d58238b31c03e32a2fae74807a1c52b6c39513869c4041a273cc4"} Dec 02 15:11:57 crc kubenswrapper[4814]: I1202 15:11:57.572642 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.57262426 podStartE2EDuration="2.57262426s" podCreationTimestamp="2025-12-02 15:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:11:57.56857433 +0000 UTC m=+2579.692936534" watchObservedRunningTime="2025-12-02 15:11:57.57262426 +0000 UTC m=+2579.696986464" Dec 02 15:11:57 crc kubenswrapper[4814]: I1202 15:11:57.606437 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.606419663 podStartE2EDuration="2.606419663s" podCreationTimestamp="2025-12-02 15:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:11:57.605511331 +0000 UTC m=+2579.729873545" watchObservedRunningTime="2025-12-02 15:11:57.606419663 +0000 UTC m=+2579.730781857" 
Dec 02 15:11:58 crc kubenswrapper[4814]: I1202 15:11:58.665060 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/0.log" Dec 02 15:11:59 crc kubenswrapper[4814]: I1202 15:11:59.539216 4814 generic.go:334] "Generic (PLEG): container finished" podID="e3f4591b-3da5-4951-9288-57f89b217467" containerID="1d8be242de9d13dec52696f91895aa9766e6b3c5982200bbea5497d2137d3851" exitCode=1 Dec 02 15:11:59 crc kubenswrapper[4814]: I1202 15:11:59.539342 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 15:11:59 crc kubenswrapper[4814]: I1202 15:11:59.539783 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e3f4591b-3da5-4951-9288-57f89b217467","Type":"ContainerDied","Data":"1d8be242de9d13dec52696f91895aa9766e6b3c5982200bbea5497d2137d3851"} Dec 02 15:11:59 crc kubenswrapper[4814]: I1202 15:11:59.540103 4814 scope.go:117] "RemoveContainer" containerID="1d8be242de9d13dec52696f91895aa9766e6b3c5982200bbea5497d2137d3851" Dec 02 15:11:59 crc kubenswrapper[4814]: I1202 15:11:59.753958 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:11:59 crc kubenswrapper[4814]: I1202 15:11:59.842362 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/0.log" Dec 02 15:12:00 crc kubenswrapper[4814]: I1202 15:12:00.547642 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e3f4591b-3da5-4951-9288-57f89b217467","Type":"ContainerStarted","Data":"dd2d27a55f2a721efb00ae81603bfdcb274e3b8f05afd12df1a87b271ff73728"} Dec 02 15:12:01 crc kubenswrapper[4814]: I1202 15:12:01.027954 4814 log.go:25] 
"Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/1.log" Dec 02 15:12:01 crc kubenswrapper[4814]: I1202 15:12:01.093191 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:12:01 crc kubenswrapper[4814]: I1202 15:12:01.133055 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:12:02 crc kubenswrapper[4814]: I1202 15:12:02.217038 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/1.log" Dec 02 15:12:03 crc kubenswrapper[4814]: I1202 15:12:03.411824 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/1.log" Dec 02 15:12:03 crc kubenswrapper[4814]: I1202 15:12:03.575937 4814 generic.go:334] "Generic (PLEG): container finished" podID="e3f4591b-3da5-4951-9288-57f89b217467" containerID="dd2d27a55f2a721efb00ae81603bfdcb274e3b8f05afd12df1a87b271ff73728" exitCode=1 Dec 02 15:12:03 crc kubenswrapper[4814]: I1202 15:12:03.576197 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e3f4591b-3da5-4951-9288-57f89b217467","Type":"ContainerDied","Data":"dd2d27a55f2a721efb00ae81603bfdcb274e3b8f05afd12df1a87b271ff73728"} Dec 02 15:12:03 crc kubenswrapper[4814]: I1202 15:12:03.576355 4814 scope.go:117] "RemoveContainer" containerID="1d8be242de9d13dec52696f91895aa9766e6b3c5982200bbea5497d2137d3851" Dec 02 15:12:03 crc kubenswrapper[4814]: I1202 15:12:03.577108 4814 scope.go:117] "RemoveContainer" containerID="dd2d27a55f2a721efb00ae81603bfdcb274e3b8f05afd12df1a87b271ff73728" Dec 02 15:12:03 crc 
kubenswrapper[4814]: E1202 15:12:03.577458 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:12:04 crc kubenswrapper[4814]: I1202 15:12:04.582856 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/1.log" Dec 02 15:12:05 crc kubenswrapper[4814]: I1202 15:12:05.785506 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/1.log" Dec 02 15:12:06 crc kubenswrapper[4814]: I1202 15:12:06.093108 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:12:06 crc kubenswrapper[4814]: I1202 15:12:06.110931 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:12:06 crc kubenswrapper[4814]: I1202 15:12:06.110998 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:12:06 crc kubenswrapper[4814]: I1202 15:12:06.111728 4814 scope.go:117] "RemoveContainer" containerID="dd2d27a55f2a721efb00ae81603bfdcb274e3b8f05afd12df1a87b271ff73728" Dec 02 15:12:06 crc kubenswrapper[4814]: E1202 15:12:06.111987 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine 
pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:12:06 crc kubenswrapper[4814]: I1202 15:12:06.132076 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:12:06 crc kubenswrapper[4814]: I1202 15:12:06.174770 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:12:06 crc kubenswrapper[4814]: I1202 15:12:06.227361 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:12:06 crc kubenswrapper[4814]: I1202 15:12:06.608368 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:12:06 crc kubenswrapper[4814]: I1202 15:12:06.645987 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:12:06 crc kubenswrapper[4814]: I1202 15:12:06.990053 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/1.log" Dec 02 15:12:08 crc kubenswrapper[4814]: I1202 15:12:08.186077 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/1.log" Dec 02 15:12:09 crc kubenswrapper[4814]: I1202 15:12:09.358782 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/1.log" Dec 02 15:12:10 crc kubenswrapper[4814]: I1202 15:12:10.587905 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/1.log" Dec 02 15:12:11 crc kubenswrapper[4814]: I1202 15:12:11.145780 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:12:11 crc kubenswrapper[4814]: E1202 15:12:11.146700 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:12:11 crc kubenswrapper[4814]: I1202 15:12:11.773759 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/1.log" Dec 02 15:12:12 crc kubenswrapper[4814]: I1202 15:12:12.950536 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/1.log" Dec 02 15:12:14 crc kubenswrapper[4814]: I1202 15:12:14.150675 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/1.log" Dec 02 15:12:15 crc kubenswrapper[4814]: I1202 15:12:15.308223 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/1.log" Dec 02 15:12:16 crc kubenswrapper[4814]: I1202 15:12:16.494083 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/1.log" Dec 02 15:12:17 crc kubenswrapper[4814]: I1202 15:12:17.687992 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/1.log" Dec 02 15:12:18 crc kubenswrapper[4814]: I1202 15:12:18.915865 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/1.log" Dec 02 15:12:20 crc kubenswrapper[4814]: I1202 15:12:20.085665 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/1.log" Dec 02 15:12:20 crc kubenswrapper[4814]: I1202 15:12:20.144603 4814 scope.go:117] "RemoveContainer" containerID="dd2d27a55f2a721efb00ae81603bfdcb274e3b8f05afd12df1a87b271ff73728" Dec 02 15:12:20 crc kubenswrapper[4814]: I1202 15:12:20.726718 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e3f4591b-3da5-4951-9288-57f89b217467","Type":"ContainerStarted","Data":"99c891cc0efb59998731b105aca3e222bffb73a2e4cdf90e2ecb7ff4a93dc014"} Dec 02 15:12:21 crc kubenswrapper[4814]: I1202 15:12:21.265725 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:22 crc kubenswrapper[4814]: I1202 15:12:22.459586 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:23 crc kubenswrapper[4814]: I1202 15:12:23.681748 4814 log.go:25] "Finished parsing log 
file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:23 crc kubenswrapper[4814]: I1202 15:12:23.751954 4814 generic.go:334] "Generic (PLEG): container finished" podID="e3f4591b-3da5-4951-9288-57f89b217467" containerID="99c891cc0efb59998731b105aca3e222bffb73a2e4cdf90e2ecb7ff4a93dc014" exitCode=1 Dec 02 15:12:23 crc kubenswrapper[4814]: I1202 15:12:23.751999 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e3f4591b-3da5-4951-9288-57f89b217467","Type":"ContainerDied","Data":"99c891cc0efb59998731b105aca3e222bffb73a2e4cdf90e2ecb7ff4a93dc014"} Dec 02 15:12:23 crc kubenswrapper[4814]: I1202 15:12:23.752039 4814 scope.go:117] "RemoveContainer" containerID="dd2d27a55f2a721efb00ae81603bfdcb274e3b8f05afd12df1a87b271ff73728" Dec 02 15:12:23 crc kubenswrapper[4814]: I1202 15:12:23.752697 4814 scope.go:117] "RemoveContainer" containerID="99c891cc0efb59998731b105aca3e222bffb73a2e4cdf90e2ecb7ff4a93dc014" Dec 02 15:12:23 crc kubenswrapper[4814]: E1202 15:12:23.752960 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:12:24 crc kubenswrapper[4814]: I1202 15:12:24.875481 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:25 crc kubenswrapper[4814]: I1202 15:12:25.144320 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" 
Dec 02 15:12:25 crc kubenswrapper[4814]: E1202 15:12:25.144539 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:12:26 crc kubenswrapper[4814]: I1202 15:12:26.097462 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:26 crc kubenswrapper[4814]: I1202 15:12:26.111654 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:12:26 crc kubenswrapper[4814]: I1202 15:12:26.111705 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:12:26 crc kubenswrapper[4814]: I1202 15:12:26.111717 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:12:26 crc kubenswrapper[4814]: I1202 15:12:26.111729 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:12:26 crc kubenswrapper[4814]: I1202 15:12:26.112519 4814 scope.go:117] "RemoveContainer" containerID="99c891cc0efb59998731b105aca3e222bffb73a2e4cdf90e2ecb7ff4a93dc014" Dec 02 15:12:26 crc kubenswrapper[4814]: E1202 15:12:26.112880 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine 
pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:12:27 crc kubenswrapper[4814]: I1202 15:12:27.274958 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:28 crc kubenswrapper[4814]: I1202 15:12:28.492491 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:29 crc kubenswrapper[4814]: I1202 15:12:29.670757 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:30 crc kubenswrapper[4814]: I1202 15:12:30.865338 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:32 crc kubenswrapper[4814]: I1202 15:12:32.036515 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:33 crc kubenswrapper[4814]: I1202 15:12:33.253712 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:34 crc kubenswrapper[4814]: I1202 15:12:34.429386 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:35 crc kubenswrapper[4814]: I1202 
15:12:35.608867 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:36 crc kubenswrapper[4814]: I1202 15:12:36.809279 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:37 crc kubenswrapper[4814]: I1202 15:12:37.145405 4814 scope.go:117] "RemoveContainer" containerID="99c891cc0efb59998731b105aca3e222bffb73a2e4cdf90e2ecb7ff4a93dc014" Dec 02 15:12:37 crc kubenswrapper[4814]: E1202 15:12:37.145828 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:12:37 crc kubenswrapper[4814]: I1202 15:12:37.970742 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:39 crc kubenswrapper[4814]: I1202 15:12:39.144572 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:12:39 crc kubenswrapper[4814]: E1202 15:12:39.145125 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" 
podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:12:39 crc kubenswrapper[4814]: I1202 15:12:39.164623 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:40 crc kubenswrapper[4814]: I1202 15:12:40.364845 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:41 crc kubenswrapper[4814]: I1202 15:12:41.554305 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:42 crc kubenswrapper[4814]: I1202 15:12:42.763477 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:43 crc kubenswrapper[4814]: I1202 15:12:43.976442 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:45 crc kubenswrapper[4814]: I1202 15:12:45.172924 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:46 crc kubenswrapper[4814]: I1202 15:12:46.388006 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:47 crc kubenswrapper[4814]: I1202 15:12:47.639705 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:48 crc kubenswrapper[4814]: I1202 15:12:48.910069 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/2.log" Dec 02 15:12:49 crc kubenswrapper[4814]: I1202 15:12:49.145037 4814 scope.go:117] "RemoveContainer" containerID="99c891cc0efb59998731b105aca3e222bffb73a2e4cdf90e2ecb7ff4a93dc014" Dec 02 15:12:50 crc kubenswrapper[4814]: I1202 15:12:50.008036 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e3f4591b-3da5-4951-9288-57f89b217467","Type":"ContainerStarted","Data":"f703e64a732fbfb8f065539aa60d483240d0c6145f26911d8b9f7d51d44302d1"} Dec 02 15:12:50 crc kubenswrapper[4814]: I1202 15:12:50.115750 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:12:51 crc kubenswrapper[4814]: I1202 15:12:51.273113 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:12:52 crc kubenswrapper[4814]: I1202 15:12:52.455076 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:12:53 crc kubenswrapper[4814]: I1202 15:12:53.033793 4814 generic.go:334] "Generic (PLEG): container finished" podID="e3f4591b-3da5-4951-9288-57f89b217467" containerID="f703e64a732fbfb8f065539aa60d483240d0c6145f26911d8b9f7d51d44302d1" exitCode=1 Dec 02 15:12:53 crc kubenswrapper[4814]: I1202 15:12:53.033856 4814 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e3f4591b-3da5-4951-9288-57f89b217467","Type":"ContainerDied","Data":"f703e64a732fbfb8f065539aa60d483240d0c6145f26911d8b9f7d51d44302d1"} Dec 02 15:12:53 crc kubenswrapper[4814]: I1202 15:12:53.033890 4814 scope.go:117] "RemoveContainer" containerID="99c891cc0efb59998731b105aca3e222bffb73a2e4cdf90e2ecb7ff4a93dc014" Dec 02 15:12:53 crc kubenswrapper[4814]: I1202 15:12:53.034462 4814 scope.go:117] "RemoveContainer" containerID="f703e64a732fbfb8f065539aa60d483240d0c6145f26911d8b9f7d51d44302d1" Dec 02 15:12:53 crc kubenswrapper[4814]: E1202 15:12:53.034807 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:12:53 crc kubenswrapper[4814]: I1202 15:12:53.145276 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:12:53 crc kubenswrapper[4814]: I1202 15:12:53.643344 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:12:54 crc kubenswrapper[4814]: I1202 15:12:54.068260 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerStarted","Data":"e117518c89c894ad13d9248d279bf1c0f883ff27f102eca10f935a8ccf6c6034"} Dec 02 15:12:54 crc kubenswrapper[4814]: I1202 15:12:54.851471 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:12:56 crc kubenswrapper[4814]: I1202 15:12:56.005668 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:12:56 crc kubenswrapper[4814]: I1202 15:12:56.110958 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:12:56 crc kubenswrapper[4814]: I1202 15:12:56.111068 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:12:56 crc kubenswrapper[4814]: I1202 15:12:56.111096 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:12:56 crc kubenswrapper[4814]: I1202 15:12:56.111119 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:12:56 crc kubenswrapper[4814]: I1202 15:12:56.112078 4814 scope.go:117] "RemoveContainer" containerID="f703e64a732fbfb8f065539aa60d483240d0c6145f26911d8b9f7d51d44302d1" Dec 02 15:12:56 crc kubenswrapper[4814]: E1202 15:12:56.112758 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:12:57 crc kubenswrapper[4814]: I1202 15:12:57.204174 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:12:58 crc kubenswrapper[4814]: I1202 15:12:58.393942 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:12:59 crc kubenswrapper[4814]: I1202 15:12:59.579038 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:00 crc kubenswrapper[4814]: I1202 15:13:00.771114 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:00 crc kubenswrapper[4814]: I1202 15:13:00.881388 4814 scope.go:117] "RemoveContainer" containerID="95eaf2f95f217a0235113536f00d791068bb54f296fb544d831a6ca3963e9465" Dec 02 15:13:00 crc kubenswrapper[4814]: I1202 15:13:00.908121 4814 scope.go:117] "RemoveContainer" containerID="179d76f0318f6e9b7d171d695cb1f7201367b3c945e142da28a699300d9bed5f" Dec 02 15:13:00 crc kubenswrapper[4814]: I1202 15:13:00.940187 4814 scope.go:117] "RemoveContainer" containerID="0f2f54ca6ae8ca58f2a482a882189d067d26a9956e089782e9656aa1ae3d4e6e" Dec 02 15:13:00 crc kubenswrapper[4814]: I1202 15:13:00.986691 4814 scope.go:117] "RemoveContainer" containerID="8883032e1c198be0705ad29f5b20e96d3607871250a36f31964bc72b16a3263a" Dec 02 15:13:02 crc kubenswrapper[4814]: I1202 15:13:02.001960 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:03 crc kubenswrapper[4814]: I1202 15:13:03.210295 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:04 crc kubenswrapper[4814]: I1202 15:13:04.452946 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:05 crc kubenswrapper[4814]: I1202 15:13:05.667560 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:06 crc kubenswrapper[4814]: I1202 15:13:06.851063 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:08 crc kubenswrapper[4814]: I1202 15:13:08.030256 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:09 crc kubenswrapper[4814]: I1202 15:13:09.244711 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:10 crc kubenswrapper[4814]: I1202 15:13:10.145644 4814 scope.go:117] "RemoveContainer" containerID="f703e64a732fbfb8f065539aa60d483240d0c6145f26911d8b9f7d51d44302d1" Dec 02 15:13:10 crc kubenswrapper[4814]: E1202 15:13:10.145919 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:13:10 crc kubenswrapper[4814]: I1202 15:13:10.425812 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:11 crc kubenswrapper[4814]: I1202 15:13:11.600659 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:12 crc kubenswrapper[4814]: I1202 15:13:12.795260 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:13 crc kubenswrapper[4814]: I1202 15:13:13.960198 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:15 crc kubenswrapper[4814]: I1202 15:13:15.130408 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:16 crc kubenswrapper[4814]: I1202 15:13:16.305704 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:17 crc kubenswrapper[4814]: I1202 15:13:17.484199 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:18 crc kubenswrapper[4814]: I1202 15:13:18.655871 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:18 crc kubenswrapper[4814]: I1202 15:13:18.730380 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qbdn7"] Dec 02 15:13:18 crc kubenswrapper[4814]: I1202 15:13:18.732055 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qbdn7" Dec 02 15:13:18 crc kubenswrapper[4814]: I1202 15:13:18.751685 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qbdn7"] Dec 02 15:13:18 crc kubenswrapper[4814]: I1202 15:13:18.892057 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6ede7b8-f9a5-4260-8a55-e9f3bac18375-catalog-content\") pod \"redhat-operators-qbdn7\" (UID: \"b6ede7b8-f9a5-4260-8a55-e9f3bac18375\") " pod="openshift-marketplace/redhat-operators-qbdn7" Dec 02 15:13:18 crc kubenswrapper[4814]: I1202 15:13:18.892125 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6ede7b8-f9a5-4260-8a55-e9f3bac18375-utilities\") pod \"redhat-operators-qbdn7\" (UID: \"b6ede7b8-f9a5-4260-8a55-e9f3bac18375\") " pod="openshift-marketplace/redhat-operators-qbdn7" Dec 02 15:13:18 crc kubenswrapper[4814]: I1202 15:13:18.892181 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgcjv\" (UniqueName: \"kubernetes.io/projected/b6ede7b8-f9a5-4260-8a55-e9f3bac18375-kube-api-access-bgcjv\") pod \"redhat-operators-qbdn7\" (UID: \"b6ede7b8-f9a5-4260-8a55-e9f3bac18375\") " pod="openshift-marketplace/redhat-operators-qbdn7" Dec 02 15:13:18 crc kubenswrapper[4814]: I1202 15:13:18.994893 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bgcjv\" (UniqueName: \"kubernetes.io/projected/b6ede7b8-f9a5-4260-8a55-e9f3bac18375-kube-api-access-bgcjv\") pod \"redhat-operators-qbdn7\" (UID: \"b6ede7b8-f9a5-4260-8a55-e9f3bac18375\") " pod="openshift-marketplace/redhat-operators-qbdn7" Dec 02 15:13:18 crc kubenswrapper[4814]: I1202 15:13:18.995119 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6ede7b8-f9a5-4260-8a55-e9f3bac18375-catalog-content\") pod \"redhat-operators-qbdn7\" (UID: \"b6ede7b8-f9a5-4260-8a55-e9f3bac18375\") " pod="openshift-marketplace/redhat-operators-qbdn7" Dec 02 15:13:18 crc kubenswrapper[4814]: I1202 15:13:18.995185 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6ede7b8-f9a5-4260-8a55-e9f3bac18375-utilities\") pod \"redhat-operators-qbdn7\" (UID: \"b6ede7b8-f9a5-4260-8a55-e9f3bac18375\") " pod="openshift-marketplace/redhat-operators-qbdn7" Dec 02 15:13:18 crc kubenswrapper[4814]: I1202 15:13:18.995669 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6ede7b8-f9a5-4260-8a55-e9f3bac18375-catalog-content\") pod \"redhat-operators-qbdn7\" (UID: \"b6ede7b8-f9a5-4260-8a55-e9f3bac18375\") " pod="openshift-marketplace/redhat-operators-qbdn7" Dec 02 15:13:18 crc kubenswrapper[4814]: I1202 15:13:18.995685 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6ede7b8-f9a5-4260-8a55-e9f3bac18375-utilities\") pod \"redhat-operators-qbdn7\" (UID: \"b6ede7b8-f9a5-4260-8a55-e9f3bac18375\") " pod="openshift-marketplace/redhat-operators-qbdn7" Dec 02 15:13:19 crc kubenswrapper[4814]: I1202 15:13:19.019374 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgcjv\" 
(UniqueName: \"kubernetes.io/projected/b6ede7b8-f9a5-4260-8a55-e9f3bac18375-kube-api-access-bgcjv\") pod \"redhat-operators-qbdn7\" (UID: \"b6ede7b8-f9a5-4260-8a55-e9f3bac18375\") " pod="openshift-marketplace/redhat-operators-qbdn7" Dec 02 15:13:19 crc kubenswrapper[4814]: I1202 15:13:19.063355 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qbdn7" Dec 02 15:13:19 crc kubenswrapper[4814]: I1202 15:13:19.509848 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qbdn7"] Dec 02 15:13:19 crc kubenswrapper[4814]: I1202 15:13:19.847417 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:20 crc kubenswrapper[4814]: I1202 15:13:20.324133 4814 generic.go:334] "Generic (PLEG): container finished" podID="b6ede7b8-f9a5-4260-8a55-e9f3bac18375" containerID="64ba4d2438da362e16dbc75cf996bce46ae6c9000af9d2b92cf259b3776b03bf" exitCode=0 Dec 02 15:13:20 crc kubenswrapper[4814]: I1202 15:13:20.324444 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbdn7" event={"ID":"b6ede7b8-f9a5-4260-8a55-e9f3bac18375","Type":"ContainerDied","Data":"64ba4d2438da362e16dbc75cf996bce46ae6c9000af9d2b92cf259b3776b03bf"} Dec 02 15:13:20 crc kubenswrapper[4814]: I1202 15:13:20.324476 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbdn7" event={"ID":"b6ede7b8-f9a5-4260-8a55-e9f3bac18375","Type":"ContainerStarted","Data":"7a9642edea006815f5fe3e5288caf760fe2a9ad7c3ef05c0bc9de8e7d5e38c50"} Dec 02 15:13:20 crc kubenswrapper[4814]: I1202 15:13:20.345987 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 15:13:22 crc kubenswrapper[4814]: I1202 15:13:22.010714 4814 log.go:25] "Finished parsing 
log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:22 crc kubenswrapper[4814]: I1202 15:13:22.343846 4814 generic.go:334] "Generic (PLEG): container finished" podID="b6ede7b8-f9a5-4260-8a55-e9f3bac18375" containerID="798099eb87f4bdc3884c6926f22405c85583502332e893863c65850a57336264" exitCode=0 Dec 02 15:13:22 crc kubenswrapper[4814]: I1202 15:13:22.343906 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbdn7" event={"ID":"b6ede7b8-f9a5-4260-8a55-e9f3bac18375","Type":"ContainerDied","Data":"798099eb87f4bdc3884c6926f22405c85583502332e893863c65850a57336264"} Dec 02 15:13:23 crc kubenswrapper[4814]: I1202 15:13:23.145575 4814 scope.go:117] "RemoveContainer" containerID="f703e64a732fbfb8f065539aa60d483240d0c6145f26911d8b9f7d51d44302d1" Dec 02 15:13:23 crc kubenswrapper[4814]: E1202 15:13:23.146104 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:13:23 crc kubenswrapper[4814]: I1202 15:13:23.183562 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:23 crc kubenswrapper[4814]: I1202 15:13:23.354342 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbdn7" event={"ID":"b6ede7b8-f9a5-4260-8a55-e9f3bac18375","Type":"ContainerStarted","Data":"8f9f62b89238d929eba29919c9ae20f1893f1aaf8b065a69938e63472772bcd7"} Dec 02 15:13:23 crc kubenswrapper[4814]: 
I1202 15:13:23.378343 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qbdn7" podStartSLOduration=2.727703817 podStartE2EDuration="5.378319136s" podCreationTimestamp="2025-12-02 15:13:18 +0000 UTC" firstStartedPulling="2025-12-02 15:13:20.345765538 +0000 UTC m=+2662.470127732" lastFinishedPulling="2025-12-02 15:13:22.996380847 +0000 UTC m=+2665.120743051" observedRunningTime="2025-12-02 15:13:23.372718387 +0000 UTC m=+2665.497080591" watchObservedRunningTime="2025-12-02 15:13:23.378319136 +0000 UTC m=+2665.502681370" Dec 02 15:13:24 crc kubenswrapper[4814]: I1202 15:13:24.371525 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:25 crc kubenswrapper[4814]: I1202 15:13:25.581571 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:26 crc kubenswrapper[4814]: I1202 15:13:26.738336 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:27 crc kubenswrapper[4814]: I1202 15:13:27.943844 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:29 crc kubenswrapper[4814]: I1202 15:13:29.065212 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qbdn7" Dec 02 15:13:29 crc kubenswrapper[4814]: I1202 15:13:29.065743 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qbdn7" Dec 02 15:13:29 crc 
kubenswrapper[4814]: I1202 15:13:29.134374 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:29 crc kubenswrapper[4814]: I1202 15:13:29.137334 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qbdn7" Dec 02 15:13:29 crc kubenswrapper[4814]: I1202 15:13:29.450968 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qbdn7" Dec 02 15:13:30 crc kubenswrapper[4814]: I1202 15:13:30.352258 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:31 crc kubenswrapper[4814]: I1202 15:13:31.557216 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:32 crc kubenswrapper[4814]: I1202 15:13:32.708408 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qbdn7"] Dec 02 15:13:32 crc kubenswrapper[4814]: I1202 15:13:32.708993 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qbdn7" podUID="b6ede7b8-f9a5-4260-8a55-e9f3bac18375" containerName="registry-server" containerID="cri-o://8f9f62b89238d929eba29919c9ae20f1893f1aaf8b065a69938e63472772bcd7" gracePeriod=2 Dec 02 15:13:32 crc kubenswrapper[4814]: I1202 15:13:32.755970 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:34 crc kubenswrapper[4814]: I1202 15:13:34.000707 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:34 crc kubenswrapper[4814]: I1202 15:13:34.445371 4814 generic.go:334] "Generic (PLEG): container finished" podID="b6ede7b8-f9a5-4260-8a55-e9f3bac18375" containerID="8f9f62b89238d929eba29919c9ae20f1893f1aaf8b065a69938e63472772bcd7" exitCode=0 Dec 02 15:13:34 crc kubenswrapper[4814]: I1202 15:13:34.445418 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbdn7" event={"ID":"b6ede7b8-f9a5-4260-8a55-e9f3bac18375","Type":"ContainerDied","Data":"8f9f62b89238d929eba29919c9ae20f1893f1aaf8b065a69938e63472772bcd7"} Dec 02 15:13:35 crc kubenswrapper[4814]: I1202 15:13:35.005160 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qbdn7" Dec 02 15:13:35 crc kubenswrapper[4814]: I1202 15:13:35.106254 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6ede7b8-f9a5-4260-8a55-e9f3bac18375-catalog-content\") pod \"b6ede7b8-f9a5-4260-8a55-e9f3bac18375\" (UID: \"b6ede7b8-f9a5-4260-8a55-e9f3bac18375\") " Dec 02 15:13:35 crc kubenswrapper[4814]: I1202 15:13:35.106377 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgcjv\" (UniqueName: \"kubernetes.io/projected/b6ede7b8-f9a5-4260-8a55-e9f3bac18375-kube-api-access-bgcjv\") pod \"b6ede7b8-f9a5-4260-8a55-e9f3bac18375\" (UID: \"b6ede7b8-f9a5-4260-8a55-e9f3bac18375\") " Dec 02 15:13:35 crc kubenswrapper[4814]: I1202 15:13:35.106471 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6ede7b8-f9a5-4260-8a55-e9f3bac18375-utilities\") pod \"b6ede7b8-f9a5-4260-8a55-e9f3bac18375\" (UID: \"b6ede7b8-f9a5-4260-8a55-e9f3bac18375\") " Dec 02 15:13:35 crc 
kubenswrapper[4814]: I1202 15:13:35.108140 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6ede7b8-f9a5-4260-8a55-e9f3bac18375-utilities" (OuterVolumeSpecName: "utilities") pod "b6ede7b8-f9a5-4260-8a55-e9f3bac18375" (UID: "b6ede7b8-f9a5-4260-8a55-e9f3bac18375"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:13:35 crc kubenswrapper[4814]: I1202 15:13:35.114196 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6ede7b8-f9a5-4260-8a55-e9f3bac18375-kube-api-access-bgcjv" (OuterVolumeSpecName: "kube-api-access-bgcjv") pod "b6ede7b8-f9a5-4260-8a55-e9f3bac18375" (UID: "b6ede7b8-f9a5-4260-8a55-e9f3bac18375"). InnerVolumeSpecName "kube-api-access-bgcjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:13:35 crc kubenswrapper[4814]: I1202 15:13:35.209023 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgcjv\" (UniqueName: \"kubernetes.io/projected/b6ede7b8-f9a5-4260-8a55-e9f3bac18375-kube-api-access-bgcjv\") on node \"crc\" DevicePath \"\"" Dec 02 15:13:35 crc kubenswrapper[4814]: I1202 15:13:35.209525 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6ede7b8-f9a5-4260-8a55-e9f3bac18375-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:13:35 crc kubenswrapper[4814]: I1202 15:13:35.220925 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:35 crc kubenswrapper[4814]: I1202 15:13:35.250026 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6ede7b8-f9a5-4260-8a55-e9f3bac18375-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6ede7b8-f9a5-4260-8a55-e9f3bac18375" (UID: 
"b6ede7b8-f9a5-4260-8a55-e9f3bac18375"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:13:35 crc kubenswrapper[4814]: I1202 15:13:35.311076 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6ede7b8-f9a5-4260-8a55-e9f3bac18375-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:13:35 crc kubenswrapper[4814]: I1202 15:13:35.456722 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbdn7" event={"ID":"b6ede7b8-f9a5-4260-8a55-e9f3bac18375","Type":"ContainerDied","Data":"7a9642edea006815f5fe3e5288caf760fe2a9ad7c3ef05c0bc9de8e7d5e38c50"} Dec 02 15:13:35 crc kubenswrapper[4814]: I1202 15:13:35.456776 4814 scope.go:117] "RemoveContainer" containerID="8f9f62b89238d929eba29919c9ae20f1893f1aaf8b065a69938e63472772bcd7" Dec 02 15:13:35 crc kubenswrapper[4814]: I1202 15:13:35.456844 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qbdn7" Dec 02 15:13:35 crc kubenswrapper[4814]: I1202 15:13:35.499623 4814 scope.go:117] "RemoveContainer" containerID="798099eb87f4bdc3884c6926f22405c85583502332e893863c65850a57336264" Dec 02 15:13:35 crc kubenswrapper[4814]: I1202 15:13:35.517275 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qbdn7"] Dec 02 15:13:35 crc kubenswrapper[4814]: I1202 15:13:35.529357 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qbdn7"] Dec 02 15:13:35 crc kubenswrapper[4814]: I1202 15:13:35.536700 4814 scope.go:117] "RemoveContainer" containerID="64ba4d2438da362e16dbc75cf996bce46ae6c9000af9d2b92cf259b3776b03bf" Dec 02 15:13:36 crc kubenswrapper[4814]: I1202 15:13:36.161647 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6ede7b8-f9a5-4260-8a55-e9f3bac18375" path="/var/lib/kubelet/pods/b6ede7b8-f9a5-4260-8a55-e9f3bac18375/volumes" Dec 02 15:13:36 crc kubenswrapper[4814]: I1202 15:13:36.390561 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:37 crc kubenswrapper[4814]: I1202 15:13:37.590463 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/3.log" Dec 02 15:13:38 crc kubenswrapper[4814]: I1202 15:13:38.154032 4814 scope.go:117] "RemoveContainer" containerID="f703e64a732fbfb8f065539aa60d483240d0c6145f26911d8b9f7d51d44302d1" Dec 02 15:13:38 crc kubenswrapper[4814]: I1202 15:13:38.485249 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"e3f4591b-3da5-4951-9288-57f89b217467","Type":"ContainerStarted","Data":"d8c5a79203b9c14a23f014c3c5defda97c084d1f7a00bb165b6b0887539f0c06"} Dec 02 15:13:38 crc kubenswrapper[4814]: I1202 15:13:38.775959 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:13:39 crc kubenswrapper[4814]: I1202 15:13:39.938049 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:13:41 crc kubenswrapper[4814]: I1202 15:13:41.188321 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:13:41 crc kubenswrapper[4814]: I1202 15:13:41.516090 4814 generic.go:334] "Generic (PLEG): container finished" podID="e3f4591b-3da5-4951-9288-57f89b217467" containerID="d8c5a79203b9c14a23f014c3c5defda97c084d1f7a00bb165b6b0887539f0c06" exitCode=1 Dec 02 15:13:41 crc kubenswrapper[4814]: I1202 15:13:41.516135 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e3f4591b-3da5-4951-9288-57f89b217467","Type":"ContainerDied","Data":"d8c5a79203b9c14a23f014c3c5defda97c084d1f7a00bb165b6b0887539f0c06"} Dec 02 15:13:41 crc kubenswrapper[4814]: I1202 15:13:41.516171 4814 scope.go:117] "RemoveContainer" containerID="f703e64a732fbfb8f065539aa60d483240d0c6145f26911d8b9f7d51d44302d1" Dec 02 15:13:41 crc kubenswrapper[4814]: I1202 15:13:41.516988 4814 scope.go:117] "RemoveContainer" containerID="d8c5a79203b9c14a23f014c3c5defda97c084d1f7a00bb165b6b0887539f0c06" Dec 02 15:13:41 crc kubenswrapper[4814]: E1202 15:13:41.517416 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:13:42 crc kubenswrapper[4814]: I1202 15:13:42.377203 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:13:43 crc kubenswrapper[4814]: I1202 15:13:43.557584 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:13:44 crc kubenswrapper[4814]: I1202 15:13:44.739842 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:13:45 crc kubenswrapper[4814]: I1202 15:13:45.905293 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:13:46 crc kubenswrapper[4814]: I1202 15:13:46.111268 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:13:46 crc kubenswrapper[4814]: I1202 15:13:46.111324 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:13:46 crc kubenswrapper[4814]: I1202 15:13:46.111850 4814 scope.go:117] "RemoveContainer" containerID="d8c5a79203b9c14a23f014c3c5defda97c084d1f7a00bb165b6b0887539f0c06" Dec 02 15:13:46 crc kubenswrapper[4814]: E1202 15:13:46.112212 4814 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:13:47 crc kubenswrapper[4814]: I1202 15:13:47.104054 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:13:48 crc kubenswrapper[4814]: I1202 15:13:48.271409 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:13:49 crc kubenswrapper[4814]: I1202 15:13:49.461208 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:13:50 crc kubenswrapper[4814]: I1202 15:13:50.661248 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:13:51 crc kubenswrapper[4814]: I1202 15:13:51.514900 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j467c"] Dec 02 15:13:51 crc kubenswrapper[4814]: E1202 15:13:51.515616 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6ede7b8-f9a5-4260-8a55-e9f3bac18375" containerName="extract-content" Dec 02 15:13:51 crc kubenswrapper[4814]: I1202 15:13:51.515635 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6ede7b8-f9a5-4260-8a55-e9f3bac18375" containerName="extract-content" Dec 02 15:13:51 crc kubenswrapper[4814]: E1202 
15:13:51.515651 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6ede7b8-f9a5-4260-8a55-e9f3bac18375" containerName="extract-utilities" Dec 02 15:13:51 crc kubenswrapper[4814]: I1202 15:13:51.515659 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6ede7b8-f9a5-4260-8a55-e9f3bac18375" containerName="extract-utilities" Dec 02 15:13:51 crc kubenswrapper[4814]: E1202 15:13:51.515678 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6ede7b8-f9a5-4260-8a55-e9f3bac18375" containerName="registry-server" Dec 02 15:13:51 crc kubenswrapper[4814]: I1202 15:13:51.515686 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6ede7b8-f9a5-4260-8a55-e9f3bac18375" containerName="registry-server" Dec 02 15:13:51 crc kubenswrapper[4814]: I1202 15:13:51.515880 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6ede7b8-f9a5-4260-8a55-e9f3bac18375" containerName="registry-server" Dec 02 15:13:51 crc kubenswrapper[4814]: I1202 15:13:51.517243 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j467c" Dec 02 15:13:51 crc kubenswrapper[4814]: I1202 15:13:51.541282 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j467c"] Dec 02 15:13:51 crc kubenswrapper[4814]: I1202 15:13:51.584928 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49bc1b27-5336-446a-8cbe-a2886cde0a84-catalog-content\") pod \"certified-operators-j467c\" (UID: \"49bc1b27-5336-446a-8cbe-a2886cde0a84\") " pod="openshift-marketplace/certified-operators-j467c" Dec 02 15:13:51 crc kubenswrapper[4814]: I1202 15:13:51.585049 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kvhd\" (UniqueName: \"kubernetes.io/projected/49bc1b27-5336-446a-8cbe-a2886cde0a84-kube-api-access-4kvhd\") pod \"certified-operators-j467c\" (UID: \"49bc1b27-5336-446a-8cbe-a2886cde0a84\") " pod="openshift-marketplace/certified-operators-j467c" Dec 02 15:13:51 crc kubenswrapper[4814]: I1202 15:13:51.585083 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49bc1b27-5336-446a-8cbe-a2886cde0a84-utilities\") pod \"certified-operators-j467c\" (UID: \"49bc1b27-5336-446a-8cbe-a2886cde0a84\") " pod="openshift-marketplace/certified-operators-j467c" Dec 02 15:13:51 crc kubenswrapper[4814]: I1202 15:13:51.685679 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kvhd\" (UniqueName: \"kubernetes.io/projected/49bc1b27-5336-446a-8cbe-a2886cde0a84-kube-api-access-4kvhd\") pod \"certified-operators-j467c\" (UID: \"49bc1b27-5336-446a-8cbe-a2886cde0a84\") " pod="openshift-marketplace/certified-operators-j467c" Dec 02 15:13:51 crc kubenswrapper[4814]: I1202 15:13:51.685717 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49bc1b27-5336-446a-8cbe-a2886cde0a84-utilities\") pod \"certified-operators-j467c\" (UID: \"49bc1b27-5336-446a-8cbe-a2886cde0a84\") " pod="openshift-marketplace/certified-operators-j467c" Dec 02 15:13:51 crc kubenswrapper[4814]: I1202 15:13:51.685779 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49bc1b27-5336-446a-8cbe-a2886cde0a84-catalog-content\") pod \"certified-operators-j467c\" (UID: \"49bc1b27-5336-446a-8cbe-a2886cde0a84\") " pod="openshift-marketplace/certified-operators-j467c" Dec 02 15:13:51 crc kubenswrapper[4814]: I1202 15:13:51.686254 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49bc1b27-5336-446a-8cbe-a2886cde0a84-catalog-content\") pod \"certified-operators-j467c\" (UID: \"49bc1b27-5336-446a-8cbe-a2886cde0a84\") " pod="openshift-marketplace/certified-operators-j467c" Dec 02 15:13:51 crc kubenswrapper[4814]: I1202 15:13:51.686437 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49bc1b27-5336-446a-8cbe-a2886cde0a84-utilities\") pod \"certified-operators-j467c\" (UID: \"49bc1b27-5336-446a-8cbe-a2886cde0a84\") " pod="openshift-marketplace/certified-operators-j467c" Dec 02 15:13:51 crc kubenswrapper[4814]: I1202 15:13:51.706669 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kvhd\" (UniqueName: \"kubernetes.io/projected/49bc1b27-5336-446a-8cbe-a2886cde0a84-kube-api-access-4kvhd\") pod \"certified-operators-j467c\" (UID: \"49bc1b27-5336-446a-8cbe-a2886cde0a84\") " pod="openshift-marketplace/certified-operators-j467c" Dec 02 15:13:51 crc kubenswrapper[4814]: I1202 15:13:51.841770 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j467c" Dec 02 15:13:51 crc kubenswrapper[4814]: I1202 15:13:51.851407 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:13:52 crc kubenswrapper[4814]: I1202 15:13:52.310146 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j467c"] Dec 02 15:13:52 crc kubenswrapper[4814]: W1202 15:13:52.319905 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49bc1b27_5336_446a_8cbe_a2886cde0a84.slice/crio-56d562559c3576edd314752ff85f76b3143630afb6bdd00cf184182862d68283 WatchSource:0}: Error finding container 56d562559c3576edd314752ff85f76b3143630afb6bdd00cf184182862d68283: Status 404 returned error can't find the container with id 56d562559c3576edd314752ff85f76b3143630afb6bdd00cf184182862d68283 Dec 02 15:13:52 crc kubenswrapper[4814]: I1202 15:13:52.611287 4814 generic.go:334] "Generic (PLEG): container finished" podID="49bc1b27-5336-446a-8cbe-a2886cde0a84" containerID="a489f5d72c20394645da0620979701e50d3fdee114a6bd39b95280e9dae4aadf" exitCode=0 Dec 02 15:13:52 crc kubenswrapper[4814]: I1202 15:13:52.611330 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j467c" event={"ID":"49bc1b27-5336-446a-8cbe-a2886cde0a84","Type":"ContainerDied","Data":"a489f5d72c20394645da0620979701e50d3fdee114a6bd39b95280e9dae4aadf"} Dec 02 15:13:52 crc kubenswrapper[4814]: I1202 15:13:52.611364 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j467c" event={"ID":"49bc1b27-5336-446a-8cbe-a2886cde0a84","Type":"ContainerStarted","Data":"56d562559c3576edd314752ff85f76b3143630afb6bdd00cf184182862d68283"} Dec 02 15:13:53 crc kubenswrapper[4814]: I1202 
15:13:53.044282 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:13:53 crc kubenswrapper[4814]: I1202 15:13:53.624159 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j467c" event={"ID":"49bc1b27-5336-446a-8cbe-a2886cde0a84","Type":"ContainerStarted","Data":"c65f7aee3fc9647f7cd0586b397135db1e7b2271a972deb4b9a2a10b48800861"} Dec 02 15:13:54 crc kubenswrapper[4814]: I1202 15:13:54.241219 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:13:54 crc kubenswrapper[4814]: I1202 15:13:54.633088 4814 generic.go:334] "Generic (PLEG): container finished" podID="49bc1b27-5336-446a-8cbe-a2886cde0a84" containerID="c65f7aee3fc9647f7cd0586b397135db1e7b2271a972deb4b9a2a10b48800861" exitCode=0 Dec 02 15:13:54 crc kubenswrapper[4814]: I1202 15:13:54.633127 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j467c" event={"ID":"49bc1b27-5336-446a-8cbe-a2886cde0a84","Type":"ContainerDied","Data":"c65f7aee3fc9647f7cd0586b397135db1e7b2271a972deb4b9a2a10b48800861"} Dec 02 15:13:55 crc kubenswrapper[4814]: I1202 15:13:55.449083 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:13:55 crc kubenswrapper[4814]: I1202 15:13:55.642702 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j467c" event={"ID":"49bc1b27-5336-446a-8cbe-a2886cde0a84","Type":"ContainerStarted","Data":"420a9a0999179243c7843cce86fffa3ea779d9ab26dee87ef1e45c0523587686"} Dec 02 15:13:55 crc kubenswrapper[4814]: I1202 15:13:55.667864 4814 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j467c" podStartSLOduration=1.924571671 podStartE2EDuration="4.667838486s" podCreationTimestamp="2025-12-02 15:13:51 +0000 UTC" firstStartedPulling="2025-12-02 15:13:52.612747912 +0000 UTC m=+2694.737110116" lastFinishedPulling="2025-12-02 15:13:55.356014727 +0000 UTC m=+2697.480376931" observedRunningTime="2025-12-02 15:13:55.664392681 +0000 UTC m=+2697.788754895" watchObservedRunningTime="2025-12-02 15:13:55.667838486 +0000 UTC m=+2697.792200720" Dec 02 15:13:56 crc kubenswrapper[4814]: I1202 15:13:56.111551 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:13:56 crc kubenswrapper[4814]: I1202 15:13:56.112251 4814 scope.go:117] "RemoveContainer" containerID="d8c5a79203b9c14a23f014c3c5defda97c084d1f7a00bb165b6b0887539f0c06" Dec 02 15:13:56 crc kubenswrapper[4814]: I1202 15:13:56.112444 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:13:56 crc kubenswrapper[4814]: E1202 15:13:56.112526 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:13:56 crc kubenswrapper[4814]: I1202 15:13:56.633781 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:13:56 crc kubenswrapper[4814]: I1202 15:13:56.660339 4814 scope.go:117] "RemoveContainer" 
containerID="d8c5a79203b9c14a23f014c3c5defda97c084d1f7a00bb165b6b0887539f0c06" Dec 02 15:13:56 crc kubenswrapper[4814]: E1202 15:13:56.660647 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:13:57 crc kubenswrapper[4814]: I1202 15:13:57.822781 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:13:59 crc kubenswrapper[4814]: I1202 15:13:59.007662 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:00 crc kubenswrapper[4814]: I1202 15:14:00.187726 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:01 crc kubenswrapper[4814]: I1202 15:14:01.376753 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:01 crc kubenswrapper[4814]: I1202 15:14:01.842515 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j467c" Dec 02 15:14:01 crc kubenswrapper[4814]: I1202 15:14:01.842589 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j467c" Dec 02 15:14:01 crc kubenswrapper[4814]: I1202 15:14:01.916073 
4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j467c" Dec 02 15:14:02 crc kubenswrapper[4814]: I1202 15:14:02.555419 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:02 crc kubenswrapper[4814]: I1202 15:14:02.809384 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j467c" Dec 02 15:14:03 crc kubenswrapper[4814]: I1202 15:14:03.724137 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:04 crc kubenswrapper[4814]: I1202 15:14:04.929009 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:05 crc kubenswrapper[4814]: I1202 15:14:05.502953 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j467c"] Dec 02 15:14:05 crc kubenswrapper[4814]: I1202 15:14:05.756096 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j467c" podUID="49bc1b27-5336-446a-8cbe-a2886cde0a84" containerName="registry-server" containerID="cri-o://420a9a0999179243c7843cce86fffa3ea779d9ab26dee87ef1e45c0523587686" gracePeriod=2 Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.160271 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.182721 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j467c" Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.275083 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49bc1b27-5336-446a-8cbe-a2886cde0a84-catalog-content\") pod \"49bc1b27-5336-446a-8cbe-a2886cde0a84\" (UID: \"49bc1b27-5336-446a-8cbe-a2886cde0a84\") " Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.275171 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49bc1b27-5336-446a-8cbe-a2886cde0a84-utilities\") pod \"49bc1b27-5336-446a-8cbe-a2886cde0a84\" (UID: \"49bc1b27-5336-446a-8cbe-a2886cde0a84\") " Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.275245 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kvhd\" (UniqueName: \"kubernetes.io/projected/49bc1b27-5336-446a-8cbe-a2886cde0a84-kube-api-access-4kvhd\") pod \"49bc1b27-5336-446a-8cbe-a2886cde0a84\" (UID: \"49bc1b27-5336-446a-8cbe-a2886cde0a84\") " Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.276194 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49bc1b27-5336-446a-8cbe-a2886cde0a84-utilities" (OuterVolumeSpecName: "utilities") pod "49bc1b27-5336-446a-8cbe-a2886cde0a84" (UID: "49bc1b27-5336-446a-8cbe-a2886cde0a84"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.282496 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49bc1b27-5336-446a-8cbe-a2886cde0a84-kube-api-access-4kvhd" (OuterVolumeSpecName: "kube-api-access-4kvhd") pod "49bc1b27-5336-446a-8cbe-a2886cde0a84" (UID: "49bc1b27-5336-446a-8cbe-a2886cde0a84"). InnerVolumeSpecName "kube-api-access-4kvhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.323991 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49bc1b27-5336-446a-8cbe-a2886cde0a84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49bc1b27-5336-446a-8cbe-a2886cde0a84" (UID: "49bc1b27-5336-446a-8cbe-a2886cde0a84"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.377180 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49bc1b27-5336-446a-8cbe-a2886cde0a84-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.377219 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49bc1b27-5336-446a-8cbe-a2886cde0a84-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.377242 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kvhd\" (UniqueName: \"kubernetes.io/projected/49bc1b27-5336-446a-8cbe-a2886cde0a84-kube-api-access-4kvhd\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.767377 4814 generic.go:334] "Generic (PLEG): container finished" podID="49bc1b27-5336-446a-8cbe-a2886cde0a84" containerID="420a9a0999179243c7843cce86fffa3ea779d9ab26dee87ef1e45c0523587686" exitCode=0 Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.767415 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j467c" event={"ID":"49bc1b27-5336-446a-8cbe-a2886cde0a84","Type":"ContainerDied","Data":"420a9a0999179243c7843cce86fffa3ea779d9ab26dee87ef1e45c0523587686"} Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.767439 4814 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-j467c" event={"ID":"49bc1b27-5336-446a-8cbe-a2886cde0a84","Type":"ContainerDied","Data":"56d562559c3576edd314752ff85f76b3143630afb6bdd00cf184182862d68283"} Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.767456 4814 scope.go:117] "RemoveContainer" containerID="420a9a0999179243c7843cce86fffa3ea779d9ab26dee87ef1e45c0523587686" Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.767529 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j467c" Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.789482 4814 scope.go:117] "RemoveContainer" containerID="c65f7aee3fc9647f7cd0586b397135db1e7b2271a972deb4b9a2a10b48800861" Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.809893 4814 scope.go:117] "RemoveContainer" containerID="a489f5d72c20394645da0620979701e50d3fdee114a6bd39b95280e9dae4aadf" Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.822666 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j467c"] Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.829171 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j467c"] Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.857876 4814 scope.go:117] "RemoveContainer" containerID="420a9a0999179243c7843cce86fffa3ea779d9ab26dee87ef1e45c0523587686" Dec 02 15:14:06 crc kubenswrapper[4814]: E1202 15:14:06.858449 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"420a9a0999179243c7843cce86fffa3ea779d9ab26dee87ef1e45c0523587686\": container with ID starting with 420a9a0999179243c7843cce86fffa3ea779d9ab26dee87ef1e45c0523587686 not found: ID does not exist" containerID="420a9a0999179243c7843cce86fffa3ea779d9ab26dee87ef1e45c0523587686" Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 
15:14:06.858521 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"420a9a0999179243c7843cce86fffa3ea779d9ab26dee87ef1e45c0523587686"} err="failed to get container status \"420a9a0999179243c7843cce86fffa3ea779d9ab26dee87ef1e45c0523587686\": rpc error: code = NotFound desc = could not find container \"420a9a0999179243c7843cce86fffa3ea779d9ab26dee87ef1e45c0523587686\": container with ID starting with 420a9a0999179243c7843cce86fffa3ea779d9ab26dee87ef1e45c0523587686 not found: ID does not exist" Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.858562 4814 scope.go:117] "RemoveContainer" containerID="c65f7aee3fc9647f7cd0586b397135db1e7b2271a972deb4b9a2a10b48800861" Dec 02 15:14:06 crc kubenswrapper[4814]: E1202 15:14:06.858917 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c65f7aee3fc9647f7cd0586b397135db1e7b2271a972deb4b9a2a10b48800861\": container with ID starting with c65f7aee3fc9647f7cd0586b397135db1e7b2271a972deb4b9a2a10b48800861 not found: ID does not exist" containerID="c65f7aee3fc9647f7cd0586b397135db1e7b2271a972deb4b9a2a10b48800861" Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.859009 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c65f7aee3fc9647f7cd0586b397135db1e7b2271a972deb4b9a2a10b48800861"} err="failed to get container status \"c65f7aee3fc9647f7cd0586b397135db1e7b2271a972deb4b9a2a10b48800861\": rpc error: code = NotFound desc = could not find container \"c65f7aee3fc9647f7cd0586b397135db1e7b2271a972deb4b9a2a10b48800861\": container with ID starting with c65f7aee3fc9647f7cd0586b397135db1e7b2271a972deb4b9a2a10b48800861 not found: ID does not exist" Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.859084 4814 scope.go:117] "RemoveContainer" containerID="a489f5d72c20394645da0620979701e50d3fdee114a6bd39b95280e9dae4aadf" Dec 02 15:14:06 crc 
kubenswrapper[4814]: E1202 15:14:06.859467 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a489f5d72c20394645da0620979701e50d3fdee114a6bd39b95280e9dae4aadf\": container with ID starting with a489f5d72c20394645da0620979701e50d3fdee114a6bd39b95280e9dae4aadf not found: ID does not exist" containerID="a489f5d72c20394645da0620979701e50d3fdee114a6bd39b95280e9dae4aadf" Dec 02 15:14:06 crc kubenswrapper[4814]: I1202 15:14:06.859511 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a489f5d72c20394645da0620979701e50d3fdee114a6bd39b95280e9dae4aadf"} err="failed to get container status \"a489f5d72c20394645da0620979701e50d3fdee114a6bd39b95280e9dae4aadf\": rpc error: code = NotFound desc = could not find container \"a489f5d72c20394645da0620979701e50d3fdee114a6bd39b95280e9dae4aadf\": container with ID starting with a489f5d72c20394645da0620979701e50d3fdee114a6bd39b95280e9dae4aadf not found: ID does not exist" Dec 02 15:14:07 crc kubenswrapper[4814]: I1202 15:14:07.359587 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:08 crc kubenswrapper[4814]: I1202 15:14:08.164453 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49bc1b27-5336-446a-8cbe-a2886cde0a84" path="/var/lib/kubelet/pods/49bc1b27-5336-446a-8cbe-a2886cde0a84/volumes" Dec 02 15:14:08 crc kubenswrapper[4814]: I1202 15:14:08.557472 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:09 crc kubenswrapper[4814]: I1202 15:14:09.767424 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:10 crc kubenswrapper[4814]: I1202 15:14:10.962982 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:11 crc kubenswrapper[4814]: I1202 15:14:11.145421 4814 scope.go:117] "RemoveContainer" containerID="d8c5a79203b9c14a23f014c3c5defda97c084d1f7a00bb165b6b0887539f0c06" Dec 02 15:14:11 crc kubenswrapper[4814]: E1202 15:14:11.145856 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:14:12 crc kubenswrapper[4814]: I1202 15:14:12.174524 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:13 crc kubenswrapper[4814]: I1202 15:14:13.374262 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:14 crc kubenswrapper[4814]: I1202 15:14:14.621262 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:15 crc kubenswrapper[4814]: I1202 15:14:15.804833 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:16 crc kubenswrapper[4814]: I1202 15:14:16.991190 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:18 crc kubenswrapper[4814]: I1202 15:14:18.168316 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:19 crc kubenswrapper[4814]: I1202 15:14:19.349995 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:20 crc kubenswrapper[4814]: I1202 15:14:20.548292 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:21 crc kubenswrapper[4814]: I1202 15:14:21.738796 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:22 crc kubenswrapper[4814]: I1202 15:14:22.931349 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:23 crc kubenswrapper[4814]: I1202 15:14:23.145152 4814 scope.go:117] "RemoveContainer" containerID="d8c5a79203b9c14a23f014c3c5defda97c084d1f7a00bb165b6b0887539f0c06" Dec 02 15:14:23 crc kubenswrapper[4814]: E1202 15:14:23.145691 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" 
with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:14:24 crc kubenswrapper[4814]: I1202 15:14:24.090218 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:25 crc kubenswrapper[4814]: I1202 15:14:25.287895 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:26 crc kubenswrapper[4814]: I1202 15:14:26.483386 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:27 crc kubenswrapper[4814]: I1202 15:14:27.736757 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:28 crc kubenswrapper[4814]: I1202 15:14:28.928193 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:30 crc kubenswrapper[4814]: I1202 15:14:30.151524 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:31 crc kubenswrapper[4814]: I1202 15:14:31.349747 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:32 crc kubenswrapper[4814]: I1202 15:14:32.579932 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:33 crc kubenswrapper[4814]: I1202 15:14:33.797349 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:34 crc kubenswrapper[4814]: I1202 15:14:34.993079 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:35 crc kubenswrapper[4814]: I1202 15:14:35.145187 4814 scope.go:117] "RemoveContainer" containerID="d8c5a79203b9c14a23f014c3c5defda97c084d1f7a00bb165b6b0887539f0c06" Dec 02 15:14:35 crc kubenswrapper[4814]: E1202 15:14:35.145684 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:14:36 crc kubenswrapper[4814]: I1202 15:14:36.208832 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:37 crc kubenswrapper[4814]: I1202 15:14:37.428350 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:38 crc kubenswrapper[4814]: I1202 15:14:38.621833 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:39 crc kubenswrapper[4814]: I1202 15:14:39.808124 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:40 crc kubenswrapper[4814]: I1202 15:14:40.982720 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:42 crc kubenswrapper[4814]: I1202 15:14:42.171261 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:43 crc kubenswrapper[4814]: I1202 15:14:43.369595 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:44 crc kubenswrapper[4814]: I1202 15:14:44.628966 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:45 crc kubenswrapper[4814]: I1202 15:14:45.841215 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:47 crc kubenswrapper[4814]: I1202 15:14:47.048012 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:48 crc kubenswrapper[4814]: I1202 15:14:48.263059 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:49 crc kubenswrapper[4814]: I1202 15:14:49.446264 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:50 crc kubenswrapper[4814]: I1202 15:14:50.145367 4814 scope.go:117] "RemoveContainer" containerID="d8c5a79203b9c14a23f014c3c5defda97c084d1f7a00bb165b6b0887539f0c06" Dec 02 15:14:50 crc kubenswrapper[4814]: E1202 15:14:50.145912 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:14:50 crc kubenswrapper[4814]: I1202 15:14:50.623095 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:51 crc kubenswrapper[4814]: I1202 15:14:51.847356 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:53 crc kubenswrapper[4814]: I1202 15:14:53.037720 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:54 crc kubenswrapper[4814]: I1202 15:14:54.212403 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:55 crc kubenswrapper[4814]: I1202 15:14:55.402967 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:56 crc kubenswrapper[4814]: I1202 15:14:56.579457 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:57 crc kubenswrapper[4814]: I1202 15:14:57.767738 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:14:58 crc kubenswrapper[4814]: I1202 15:14:58.945971 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.114745 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.155105 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd"] Dec 02 15:15:00 crc kubenswrapper[4814]: E1202 15:15:00.155478 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49bc1b27-5336-446a-8cbe-a2886cde0a84" 
containerName="registry-server" Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.155500 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bc1b27-5336-446a-8cbe-a2886cde0a84" containerName="registry-server" Dec 02 15:15:00 crc kubenswrapper[4814]: E1202 15:15:00.155524 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49bc1b27-5336-446a-8cbe-a2886cde0a84" containerName="extract-content" Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.155532 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bc1b27-5336-446a-8cbe-a2886cde0a84" containerName="extract-content" Dec 02 15:15:00 crc kubenswrapper[4814]: E1202 15:15:00.155564 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49bc1b27-5336-446a-8cbe-a2886cde0a84" containerName="extract-utilities" Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.155572 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bc1b27-5336-446a-8cbe-a2886cde0a84" containerName="extract-utilities" Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.155763 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="49bc1b27-5336-446a-8cbe-a2886cde0a84" containerName="registry-server" Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.156483 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd" Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.158408 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.158523 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.164213 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd"] Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.329409 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4129cf95-8b81-49be-a6a3-3020b12aa580-config-volume\") pod \"collect-profiles-29411475-xcgwd\" (UID: \"4129cf95-8b81-49be-a6a3-3020b12aa580\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd" Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.329748 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4129cf95-8b81-49be-a6a3-3020b12aa580-secret-volume\") pod \"collect-profiles-29411475-xcgwd\" (UID: \"4129cf95-8b81-49be-a6a3-3020b12aa580\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd" Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.329937 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67dv8\" (UniqueName: \"kubernetes.io/projected/4129cf95-8b81-49be-a6a3-3020b12aa580-kube-api-access-67dv8\") pod \"collect-profiles-29411475-xcgwd\" (UID: \"4129cf95-8b81-49be-a6a3-3020b12aa580\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd" Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.431777 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4129cf95-8b81-49be-a6a3-3020b12aa580-secret-volume\") pod \"collect-profiles-29411475-xcgwd\" (UID: \"4129cf95-8b81-49be-a6a3-3020b12aa580\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd" Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.431959 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67dv8\" (UniqueName: \"kubernetes.io/projected/4129cf95-8b81-49be-a6a3-3020b12aa580-kube-api-access-67dv8\") pod \"collect-profiles-29411475-xcgwd\" (UID: \"4129cf95-8b81-49be-a6a3-3020b12aa580\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd" Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.432081 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4129cf95-8b81-49be-a6a3-3020b12aa580-config-volume\") pod \"collect-profiles-29411475-xcgwd\" (UID: \"4129cf95-8b81-49be-a6a3-3020b12aa580\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd" Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.434094 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4129cf95-8b81-49be-a6a3-3020b12aa580-config-volume\") pod \"collect-profiles-29411475-xcgwd\" (UID: \"4129cf95-8b81-49be-a6a3-3020b12aa580\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd" Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.439839 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4129cf95-8b81-49be-a6a3-3020b12aa580-secret-volume\") pod \"collect-profiles-29411475-xcgwd\" (UID: \"4129cf95-8b81-49be-a6a3-3020b12aa580\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd" Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.448935 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67dv8\" (UniqueName: \"kubernetes.io/projected/4129cf95-8b81-49be-a6a3-3020b12aa580-kube-api-access-67dv8\") pod \"collect-profiles-29411475-xcgwd\" (UID: \"4129cf95-8b81-49be-a6a3-3020b12aa580\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd" Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.480595 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd" Dec 02 15:15:00 crc kubenswrapper[4814]: I1202 15:15:00.936469 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd"] Dec 02 15:15:01 crc kubenswrapper[4814]: I1202 15:15:01.270458 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd" event={"ID":"4129cf95-8b81-49be-a6a3-3020b12aa580","Type":"ContainerStarted","Data":"69eed6e4373adede1bbd82d7d4f124ce52d1bf7eac690e907b652080ee2f9abd"} Dec 02 15:15:01 crc kubenswrapper[4814]: I1202 15:15:01.270535 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd" event={"ID":"4129cf95-8b81-49be-a6a3-3020b12aa580","Type":"ContainerStarted","Data":"a23f0182841e27995e79c871b075cb8dca21476dc4992799ed6d68eb9b939d60"} Dec 02 15:15:01 crc kubenswrapper[4814]: I1202 15:15:01.279520 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:15:01 crc kubenswrapper[4814]: I1202 15:15:01.295403 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd" podStartSLOduration=1.295387911 podStartE2EDuration="1.295387911s" podCreationTimestamp="2025-12-02 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:15:01.290394568 +0000 UTC m=+2763.414756772" watchObservedRunningTime="2025-12-02 15:15:01.295387911 +0000 UTC m=+2763.419750115" Dec 02 15:15:02 crc kubenswrapper[4814]: I1202 15:15:02.280192 4814 generic.go:334] "Generic (PLEG): container finished" podID="4129cf95-8b81-49be-a6a3-3020b12aa580" containerID="69eed6e4373adede1bbd82d7d4f124ce52d1bf7eac690e907b652080ee2f9abd" exitCode=0 Dec 02 15:15:02 crc kubenswrapper[4814]: I1202 15:15:02.280266 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd" event={"ID":"4129cf95-8b81-49be-a6a3-3020b12aa580","Type":"ContainerDied","Data":"69eed6e4373adede1bbd82d7d4f124ce52d1bf7eac690e907b652080ee2f9abd"} Dec 02 15:15:02 crc kubenswrapper[4814]: I1202 15:15:02.446371 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:15:03 crc kubenswrapper[4814]: I1202 15:15:03.625514 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:15:03 crc kubenswrapper[4814]: I1202 15:15:03.649316 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd" Dec 02 15:15:03 crc kubenswrapper[4814]: I1202 15:15:03.783995 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67dv8\" (UniqueName: \"kubernetes.io/projected/4129cf95-8b81-49be-a6a3-3020b12aa580-kube-api-access-67dv8\") pod \"4129cf95-8b81-49be-a6a3-3020b12aa580\" (UID: \"4129cf95-8b81-49be-a6a3-3020b12aa580\") " Dec 02 15:15:03 crc kubenswrapper[4814]: I1202 15:15:03.784209 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4129cf95-8b81-49be-a6a3-3020b12aa580-secret-volume\") pod \"4129cf95-8b81-49be-a6a3-3020b12aa580\" (UID: \"4129cf95-8b81-49be-a6a3-3020b12aa580\") " Dec 02 15:15:03 crc kubenswrapper[4814]: I1202 15:15:03.784305 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4129cf95-8b81-49be-a6a3-3020b12aa580-config-volume\") pod \"4129cf95-8b81-49be-a6a3-3020b12aa580\" (UID: \"4129cf95-8b81-49be-a6a3-3020b12aa580\") " Dec 02 15:15:03 crc kubenswrapper[4814]: I1202 15:15:03.786344 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4129cf95-8b81-49be-a6a3-3020b12aa580-config-volume" (OuterVolumeSpecName: "config-volume") pod "4129cf95-8b81-49be-a6a3-3020b12aa580" (UID: "4129cf95-8b81-49be-a6a3-3020b12aa580"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:15:03 crc kubenswrapper[4814]: I1202 15:15:03.791438 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4129cf95-8b81-49be-a6a3-3020b12aa580-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4129cf95-8b81-49be-a6a3-3020b12aa580" (UID: "4129cf95-8b81-49be-a6a3-3020b12aa580"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:15:03 crc kubenswrapper[4814]: I1202 15:15:03.791518 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4129cf95-8b81-49be-a6a3-3020b12aa580-kube-api-access-67dv8" (OuterVolumeSpecName: "kube-api-access-67dv8") pod "4129cf95-8b81-49be-a6a3-3020b12aa580" (UID: "4129cf95-8b81-49be-a6a3-3020b12aa580"). InnerVolumeSpecName "kube-api-access-67dv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:15:03 crc kubenswrapper[4814]: I1202 15:15:03.885993 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67dv8\" (UniqueName: \"kubernetes.io/projected/4129cf95-8b81-49be-a6a3-3020b12aa580-kube-api-access-67dv8\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:03 crc kubenswrapper[4814]: I1202 15:15:03.886030 4814 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4129cf95-8b81-49be-a6a3-3020b12aa580-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:03 crc kubenswrapper[4814]: I1202 15:15:03.886039 4814 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4129cf95-8b81-49be-a6a3-3020b12aa580-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:04 crc kubenswrapper[4814]: I1202 15:15:04.300601 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd" event={"ID":"4129cf95-8b81-49be-a6a3-3020b12aa580","Type":"ContainerDied","Data":"a23f0182841e27995e79c871b075cb8dca21476dc4992799ed6d68eb9b939d60"} Dec 02 15:15:04 crc kubenswrapper[4814]: I1202 15:15:04.300640 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a23f0182841e27995e79c871b075cb8dca21476dc4992799ed6d68eb9b939d60" Dec 02 15:15:04 crc kubenswrapper[4814]: I1202 15:15:04.300710 4814 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-xcgwd" Dec 02 15:15:04 crc kubenswrapper[4814]: I1202 15:15:04.365310 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm"] Dec 02 15:15:04 crc kubenswrapper[4814]: I1202 15:15:04.372709 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411430-ckmjm"] Dec 02 15:15:04 crc kubenswrapper[4814]: I1202 15:15:04.842015 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:15:05 crc kubenswrapper[4814]: I1202 15:15:05.144832 4814 scope.go:117] "RemoveContainer" containerID="d8c5a79203b9c14a23f014c3c5defda97c084d1f7a00bb165b6b0887539f0c06" Dec 02 15:15:06 crc kubenswrapper[4814]: I1202 15:15:06.043744 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/4.log" Dec 02 15:15:06 crc kubenswrapper[4814]: I1202 15:15:06.158014 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19d86ed9-533c-4a11-a33b-74257f105a6e" path="/var/lib/kubelet/pods/19d86ed9-533c-4a11-a33b-74257f105a6e/volumes" Dec 02 15:15:06 crc kubenswrapper[4814]: I1202 15:15:06.320575 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e3f4591b-3da5-4951-9288-57f89b217467","Type":"ContainerStarted","Data":"b8d0e5f0e516e604b2fdd60e2cea50201d14756eabe7831e8468b85bfbad49a1"} Dec 02 15:15:07 crc kubenswrapper[4814]: I1202 15:15:07.234788 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:08 crc kubenswrapper[4814]: I1202 15:15:08.340917 4814 generic.go:334] "Generic (PLEG): container finished" podID="e3f4591b-3da5-4951-9288-57f89b217467" containerID="b8d0e5f0e516e604b2fdd60e2cea50201d14756eabe7831e8468b85bfbad49a1" exitCode=1 Dec 02 15:15:08 crc kubenswrapper[4814]: I1202 15:15:08.340978 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e3f4591b-3da5-4951-9288-57f89b217467","Type":"ContainerDied","Data":"b8d0e5f0e516e604b2fdd60e2cea50201d14756eabe7831e8468b85bfbad49a1"} Dec 02 15:15:08 crc kubenswrapper[4814]: I1202 15:15:08.341347 4814 scope.go:117] "RemoveContainer" containerID="d8c5a79203b9c14a23f014c3c5defda97c084d1f7a00bb165b6b0887539f0c06" Dec 02 15:15:08 crc kubenswrapper[4814]: I1202 15:15:08.342111 4814 scope.go:117] "RemoveContainer" containerID="b8d0e5f0e516e604b2fdd60e2cea50201d14756eabe7831e8468b85bfbad49a1" Dec 02 15:15:08 crc kubenswrapper[4814]: E1202 15:15:08.342415 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:15:08 crc kubenswrapper[4814]: I1202 15:15:08.445779 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:09 crc kubenswrapper[4814]: I1202 15:15:09.629903 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:10 crc kubenswrapper[4814]: I1202 15:15:10.797728 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:11 crc kubenswrapper[4814]: I1202 15:15:11.980829 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:13 crc kubenswrapper[4814]: I1202 15:15:13.186391 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:14 crc kubenswrapper[4814]: I1202 15:15:14.431512 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:15 crc kubenswrapper[4814]: I1202 15:15:15.606935 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:16 crc kubenswrapper[4814]: I1202 15:15:16.111611 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:15:16 crc kubenswrapper[4814]: I1202 15:15:16.111674 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:15:16 crc kubenswrapper[4814]: I1202 15:15:16.112218 4814 scope.go:117] "RemoveContainer" containerID="b8d0e5f0e516e604b2fdd60e2cea50201d14756eabe7831e8468b85bfbad49a1" Dec 02 15:15:16 crc 
kubenswrapper[4814]: E1202 15:15:16.112525 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:15:16 crc kubenswrapper[4814]: I1202 15:15:16.820454 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:18 crc kubenswrapper[4814]: I1202 15:15:18.048361 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:19 crc kubenswrapper[4814]: I1202 15:15:19.269312 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:20 crc kubenswrapper[4814]: I1202 15:15:20.650738 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:15:20 crc kubenswrapper[4814]: I1202 15:15:20.650829 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:15:20 crc kubenswrapper[4814]: 
I1202 15:15:20.918636 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:22 crc kubenswrapper[4814]: I1202 15:15:22.101077 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:23 crc kubenswrapper[4814]: I1202 15:15:23.304379 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:24 crc kubenswrapper[4814]: I1202 15:15:24.514895 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:25 crc kubenswrapper[4814]: I1202 15:15:25.695517 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:26 crc kubenswrapper[4814]: I1202 15:15:26.111011 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:15:26 crc kubenswrapper[4814]: I1202 15:15:26.111079 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:15:26 crc kubenswrapper[4814]: I1202 15:15:26.112021 4814 scope.go:117] "RemoveContainer" containerID="b8d0e5f0e516e604b2fdd60e2cea50201d14756eabe7831e8468b85bfbad49a1" Dec 02 15:15:26 crc kubenswrapper[4814]: E1202 15:15:26.112485 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s 
restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:15:26 crc kubenswrapper[4814]: I1202 15:15:26.878610 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:28 crc kubenswrapper[4814]: I1202 15:15:28.065904 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:29 crc kubenswrapper[4814]: I1202 15:15:29.262583 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:30 crc kubenswrapper[4814]: I1202 15:15:30.430787 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:31 crc kubenswrapper[4814]: I1202 15:15:31.619485 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:32 crc kubenswrapper[4814]: I1202 15:15:32.800108 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:33 crc kubenswrapper[4814]: I1202 15:15:33.996219 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" 
Dec 02 15:15:35 crc kubenswrapper[4814]: I1202 15:15:35.198360 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:36 crc kubenswrapper[4814]: I1202 15:15:36.393089 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:37 crc kubenswrapper[4814]: I1202 15:15:37.145052 4814 scope.go:117] "RemoveContainer" containerID="b8d0e5f0e516e604b2fdd60e2cea50201d14756eabe7831e8468b85bfbad49a1" Dec 02 15:15:37 crc kubenswrapper[4814]: E1202 15:15:37.145326 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:15:37 crc kubenswrapper[4814]: I1202 15:15:37.596399 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:38 crc kubenswrapper[4814]: I1202 15:15:38.784500 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:39 crc kubenswrapper[4814]: I1202 15:15:39.954297 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:41 crc kubenswrapper[4814]: I1202 15:15:41.149106 4814 log.go:25] "Finished 
parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:42 crc kubenswrapper[4814]: I1202 15:15:42.348888 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:43 crc kubenswrapper[4814]: I1202 15:15:43.546035 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:44 crc kubenswrapper[4814]: I1202 15:15:44.746907 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:45 crc kubenswrapper[4814]: I1202 15:15:45.951578 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:47 crc kubenswrapper[4814]: I1202 15:15:47.147571 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:48 crc kubenswrapper[4814]: I1202 15:15:48.151568 4814 scope.go:117] "RemoveContainer" containerID="b8d0e5f0e516e604b2fdd60e2cea50201d14756eabe7831e8468b85bfbad49a1" Dec 02 15:15:48 crc kubenswrapper[4814]: E1202 15:15:48.151832 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:15:48 crc kubenswrapper[4814]: I1202 15:15:48.306075 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:49 crc kubenswrapper[4814]: I1202 15:15:49.497189 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:50 crc kubenswrapper[4814]: I1202 15:15:50.650796 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:15:50 crc kubenswrapper[4814]: I1202 15:15:50.650880 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:15:50 crc kubenswrapper[4814]: I1202 15:15:50.708720 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:51 crc kubenswrapper[4814]: I1202 15:15:51.879073 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:53 crc kubenswrapper[4814]: I1202 15:15:53.068061 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:54 crc kubenswrapper[4814]: I1202 15:15:54.248179 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:55 crc kubenswrapper[4814]: I1202 15:15:55.458824 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:56 crc kubenswrapper[4814]: I1202 15:15:56.625594 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:57 crc kubenswrapper[4814]: I1202 15:15:57.834106 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:15:59 crc kubenswrapper[4814]: I1202 15:15:59.017335 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:00 crc kubenswrapper[4814]: I1202 15:16:00.229960 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:01 crc kubenswrapper[4814]: I1202 15:16:01.144906 4814 scope.go:117] "RemoveContainer" containerID="b8d0e5f0e516e604b2fdd60e2cea50201d14756eabe7831e8468b85bfbad49a1" Dec 02 15:16:01 crc kubenswrapper[4814]: E1202 15:16:01.145164 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" 
with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:16:01 crc kubenswrapper[4814]: I1202 15:16:01.212433 4814 scope.go:117] "RemoveContainer" containerID="375b63938bea4762a8531d7c2edf4600cbc234849781e72035dff0382cab8058" Dec 02 15:16:01 crc kubenswrapper[4814]: I1202 15:16:01.411202 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:02 crc kubenswrapper[4814]: I1202 15:16:02.604381 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:03 crc kubenswrapper[4814]: I1202 15:16:03.791356 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:04 crc kubenswrapper[4814]: I1202 15:16:04.990509 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:06 crc kubenswrapper[4814]: I1202 15:16:06.199175 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:07 crc kubenswrapper[4814]: I1202 15:16:07.443533 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:08 crc 
kubenswrapper[4814]: I1202 15:16:08.612476 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:09 crc kubenswrapper[4814]: I1202 15:16:09.772088 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:10 crc kubenswrapper[4814]: I1202 15:16:10.962451 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:12 crc kubenswrapper[4814]: I1202 15:16:12.207759 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:13 crc kubenswrapper[4814]: I1202 15:16:13.144747 4814 scope.go:117] "RemoveContainer" containerID="b8d0e5f0e516e604b2fdd60e2cea50201d14756eabe7831e8468b85bfbad49a1" Dec 02 15:16:13 crc kubenswrapper[4814]: E1202 15:16:13.145307 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:16:13 crc kubenswrapper[4814]: I1202 15:16:13.438640 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:14 crc kubenswrapper[4814]: I1202 15:16:14.700998 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:15 crc kubenswrapper[4814]: I1202 15:16:15.909916 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:17 crc kubenswrapper[4814]: I1202 15:16:17.111546 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:18 crc kubenswrapper[4814]: I1202 15:16:18.323920 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:19 crc kubenswrapper[4814]: I1202 15:16:19.519872 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:20 crc kubenswrapper[4814]: I1202 15:16:20.650659 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:16:20 crc kubenswrapper[4814]: I1202 15:16:20.650926 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:16:20 crc kubenswrapper[4814]: I1202 15:16:20.651166 4814 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 15:16:20 crc kubenswrapper[4814]: I1202 15:16:20.651788 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e117518c89c894ad13d9248d279bf1c0f883ff27f102eca10f935a8ccf6c6034"} pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 15:16:20 crc kubenswrapper[4814]: I1202 15:16:20.651840 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" containerID="cri-o://e117518c89c894ad13d9248d279bf1c0f883ff27f102eca10f935a8ccf6c6034" gracePeriod=600 Dec 02 15:16:20 crc kubenswrapper[4814]: I1202 15:16:20.691514 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:21 crc kubenswrapper[4814]: I1202 15:16:21.046502 4814 generic.go:334] "Generic (PLEG): container finished" podID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerID="e117518c89c894ad13d9248d279bf1c0f883ff27f102eca10f935a8ccf6c6034" exitCode=0 Dec 02 15:16:21 crc kubenswrapper[4814]: I1202 15:16:21.046539 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerDied","Data":"e117518c89c894ad13d9248d279bf1c0f883ff27f102eca10f935a8ccf6c6034"} Dec 02 15:16:21 crc kubenswrapper[4814]: I1202 15:16:21.046565 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" 
event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerStarted","Data":"0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29"} Dec 02 15:16:21 crc kubenswrapper[4814]: I1202 15:16:21.046581 4814 scope.go:117] "RemoveContainer" containerID="aaaf3aa9e1b374d1335f4f4e136b7558a53a6eb4f55a903afcc7a051856e4584" Dec 02 15:16:21 crc kubenswrapper[4814]: I1202 15:16:21.880245 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:23 crc kubenswrapper[4814]: I1202 15:16:23.044715 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:24 crc kubenswrapper[4814]: I1202 15:16:24.144449 4814 scope.go:117] "RemoveContainer" containerID="b8d0e5f0e516e604b2fdd60e2cea50201d14756eabe7831e8468b85bfbad49a1" Dec 02 15:16:24 crc kubenswrapper[4814]: E1202 15:16:24.144675 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:16:24 crc kubenswrapper[4814]: I1202 15:16:24.218082 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:25 crc kubenswrapper[4814]: I1202 15:16:25.420004 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 
15:16:26 crc kubenswrapper[4814]: I1202 15:16:26.604656 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:27 crc kubenswrapper[4814]: I1202 15:16:27.796206 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:28 crc kubenswrapper[4814]: I1202 15:16:28.960409 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:30 crc kubenswrapper[4814]: I1202 15:16:30.148662 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:31 crc kubenswrapper[4814]: I1202 15:16:31.327635 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:32 crc kubenswrapper[4814]: I1202 15:16:32.492627 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:33 crc kubenswrapper[4814]: I1202 15:16:33.657851 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:34 crc kubenswrapper[4814]: I1202 15:16:34.919007 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 
15:16:36 crc kubenswrapper[4814]: I1202 15:16:36.099223 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:37 crc kubenswrapper[4814]: I1202 15:16:37.290644 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:38 crc kubenswrapper[4814]: I1202 15:16:38.485271 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:39 crc kubenswrapper[4814]: I1202 15:16:39.144556 4814 scope.go:117] "RemoveContainer" containerID="b8d0e5f0e516e604b2fdd60e2cea50201d14756eabe7831e8468b85bfbad49a1" Dec 02 15:16:39 crc kubenswrapper[4814]: E1202 15:16:39.144828 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(e3f4591b-3da5-4951-9288-57f89b217467)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3f4591b-3da5-4951-9288-57f89b217467" Dec 02 15:16:39 crc kubenswrapper[4814]: I1202 15:16:39.684248 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3f4591b-3da5-4951-9288-57f89b217467/watcher-decision-engine/5.log" Dec 02 15:16:40 crc kubenswrapper[4814]: I1202 15:16:40.851870 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll"] Dec 02 15:16:40 crc kubenswrapper[4814]: I1202 15:16:40.863887 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bk9ll"] 
Dec 02 15:16:40 crc kubenswrapper[4814]: I1202 15:16:40.922808 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher29ce-account-delete-9ftns"] Dec 02 15:16:40 crc kubenswrapper[4814]: E1202 15:16:40.923284 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4129cf95-8b81-49be-a6a3-3020b12aa580" containerName="collect-profiles" Dec 02 15:16:40 crc kubenswrapper[4814]: I1202 15:16:40.923301 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="4129cf95-8b81-49be-a6a3-3020b12aa580" containerName="collect-profiles" Dec 02 15:16:40 crc kubenswrapper[4814]: I1202 15:16:40.923543 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="4129cf95-8b81-49be-a6a3-3020b12aa580" containerName="collect-profiles" Dec 02 15:16:40 crc kubenswrapper[4814]: I1202 15:16:40.924356 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher29ce-account-delete-9ftns" Dec 02 15:16:40 crc kubenswrapper[4814]: I1202 15:16:40.941489 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher29ce-account-delete-9ftns"] Dec 02 15:16:40 crc kubenswrapper[4814]: I1202 15:16:40.970249 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:16:40 crc kubenswrapper[4814]: I1202 15:16:40.970500 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="1b53b09c-4d33-4898-99a0-77742974a99a" containerName="watcher-kuttl-api-log" containerID="cri-o://5997e7fa0e3a33555ecfa195f5d88b5715fd236b4b697988843c1b0e1d4c7c7c" gracePeriod=30 Dec 02 15:16:40 crc kubenswrapper[4814]: I1202 15:16:40.970653 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="1b53b09c-4d33-4898-99a0-77742974a99a" containerName="watcher-api" 
containerID="cri-o://193d9ad0ef5ca7a2ddbc2be75c47a76950e389e9cf686ec3f9adf6eb363df8d3" gracePeriod=30 Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.020642 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2000b92-efd2-4794-9e7f-0f048dc44fe5-operator-scripts\") pod \"watcher29ce-account-delete-9ftns\" (UID: \"a2000b92-efd2-4794-9e7f-0f048dc44fe5\") " pod="watcher-kuttl-default/watcher29ce-account-delete-9ftns" Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.020976 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmx5b\" (UniqueName: \"kubernetes.io/projected/a2000b92-efd2-4794-9e7f-0f048dc44fe5-kube-api-access-kmx5b\") pod \"watcher29ce-account-delete-9ftns\" (UID: \"a2000b92-efd2-4794-9e7f-0f048dc44fe5\") " pod="watcher-kuttl-default/watcher29ce-account-delete-9ftns" Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.025842 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.052807 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.053039 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="a02474f6-d699-4ee7-818a-9aac661e1c27" containerName="watcher-applier" containerID="cri-o://4c3ebab0cad03da2f720b9e295dca94627bff264cdef0cfe76a08c3e5e9b87de" gracePeriod=30 Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.122503 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmx5b\" (UniqueName: \"kubernetes.io/projected/a2000b92-efd2-4794-9e7f-0f048dc44fe5-kube-api-access-kmx5b\") pod 
\"watcher29ce-account-delete-9ftns\" (UID: \"a2000b92-efd2-4794-9e7f-0f048dc44fe5\") " pod="watcher-kuttl-default/watcher29ce-account-delete-9ftns" Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.122619 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2000b92-efd2-4794-9e7f-0f048dc44fe5-operator-scripts\") pod \"watcher29ce-account-delete-9ftns\" (UID: \"a2000b92-efd2-4794-9e7f-0f048dc44fe5\") " pod="watcher-kuttl-default/watcher29ce-account-delete-9ftns" Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.123374 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2000b92-efd2-4794-9e7f-0f048dc44fe5-operator-scripts\") pod \"watcher29ce-account-delete-9ftns\" (UID: \"a2000b92-efd2-4794-9e7f-0f048dc44fe5\") " pod="watcher-kuttl-default/watcher29ce-account-delete-9ftns" Dec 02 15:16:41 crc kubenswrapper[4814]: E1202 15:16:41.134648 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4c3ebab0cad03da2f720b9e295dca94627bff264cdef0cfe76a08c3e5e9b87de" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 15:16:41 crc kubenswrapper[4814]: E1202 15:16:41.135916 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4c3ebab0cad03da2f720b9e295dca94627bff264cdef0cfe76a08c3e5e9b87de" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.142906 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmx5b\" (UniqueName: 
\"kubernetes.io/projected/a2000b92-efd2-4794-9e7f-0f048dc44fe5-kube-api-access-kmx5b\") pod \"watcher29ce-account-delete-9ftns\" (UID: \"a2000b92-efd2-4794-9e7f-0f048dc44fe5\") " pod="watcher-kuttl-default/watcher29ce-account-delete-9ftns" Dec 02 15:16:41 crc kubenswrapper[4814]: E1202 15:16:41.145246 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4c3ebab0cad03da2f720b9e295dca94627bff264cdef0cfe76a08c3e5e9b87de" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 15:16:41 crc kubenswrapper[4814]: E1202 15:16:41.145302 4814 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="a02474f6-d699-4ee7-818a-9aac661e1c27" containerName="watcher-applier" Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.217177 4814 generic.go:334] "Generic (PLEG): container finished" podID="1b53b09c-4d33-4898-99a0-77742974a99a" containerID="5997e7fa0e3a33555ecfa195f5d88b5715fd236b4b697988843c1b0e1d4c7c7c" exitCode=143 Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.217219 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1b53b09c-4d33-4898-99a0-77742974a99a","Type":"ContainerDied","Data":"5997e7fa0e3a33555ecfa195f5d88b5715fd236b4b697988843c1b0e1d4c7c7c"} Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.257588 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher29ce-account-delete-9ftns" Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.367413 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.429206 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzd97\" (UniqueName: \"kubernetes.io/projected/e3f4591b-3da5-4951-9288-57f89b217467-kube-api-access-tzd97\") pod \"e3f4591b-3da5-4951-9288-57f89b217467\" (UID: \"e3f4591b-3da5-4951-9288-57f89b217467\") " Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.429271 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3f4591b-3da5-4951-9288-57f89b217467-config-data\") pod \"e3f4591b-3da5-4951-9288-57f89b217467\" (UID: \"e3f4591b-3da5-4951-9288-57f89b217467\") " Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.429344 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3f4591b-3da5-4951-9288-57f89b217467-logs\") pod \"e3f4591b-3da5-4951-9288-57f89b217467\" (UID: \"e3f4591b-3da5-4951-9288-57f89b217467\") " Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.429376 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f4591b-3da5-4951-9288-57f89b217467-combined-ca-bundle\") pod \"e3f4591b-3da5-4951-9288-57f89b217467\" (UID: \"e3f4591b-3da5-4951-9288-57f89b217467\") " Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.429435 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e3f4591b-3da5-4951-9288-57f89b217467-custom-prometheus-ca\") pod \"e3f4591b-3da5-4951-9288-57f89b217467\" (UID: \"e3f4591b-3da5-4951-9288-57f89b217467\") " Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.436655 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e3f4591b-3da5-4951-9288-57f89b217467-logs" (OuterVolumeSpecName: "logs") pod "e3f4591b-3da5-4951-9288-57f89b217467" (UID: "e3f4591b-3da5-4951-9288-57f89b217467"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.437931 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f4591b-3da5-4951-9288-57f89b217467-kube-api-access-tzd97" (OuterVolumeSpecName: "kube-api-access-tzd97") pod "e3f4591b-3da5-4951-9288-57f89b217467" (UID: "e3f4591b-3da5-4951-9288-57f89b217467"). InnerVolumeSpecName "kube-api-access-tzd97". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.465238 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f4591b-3da5-4951-9288-57f89b217467-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3f4591b-3da5-4951-9288-57f89b217467" (UID: "e3f4591b-3da5-4951-9288-57f89b217467"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.467414 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f4591b-3da5-4951-9288-57f89b217467-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "e3f4591b-3da5-4951-9288-57f89b217467" (UID: "e3f4591b-3da5-4951-9288-57f89b217467"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.522935 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f4591b-3da5-4951-9288-57f89b217467-config-data" (OuterVolumeSpecName: "config-data") pod "e3f4591b-3da5-4951-9288-57f89b217467" (UID: "e3f4591b-3da5-4951-9288-57f89b217467"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.531333 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3f4591b-3da5-4951-9288-57f89b217467-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.531383 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3f4591b-3da5-4951-9288-57f89b217467-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.531400 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f4591b-3da5-4951-9288-57f89b217467-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.531414 4814 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e3f4591b-3da5-4951-9288-57f89b217467-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.531427 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzd97\" (UniqueName: \"kubernetes.io/projected/e3f4591b-3da5-4951-9288-57f89b217467-kube-api-access-tzd97\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.845370 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="1b53b09c-4d33-4898-99a0-77742974a99a" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.180:9322/\": read tcp 10.217.0.2:34400->10.217.0.180:9322: read: connection reset by peer" Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.845975 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
podUID="1b53b09c-4d33-4898-99a0-77742974a99a" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.180:9322/\": read tcp 10.217.0.2:34398->10.217.0.180:9322: read: connection reset by peer" Dec 02 15:16:41 crc kubenswrapper[4814]: I1202 15:16:41.866693 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher29ce-account-delete-9ftns"] Dec 02 15:16:41 crc kubenswrapper[4814]: W1202 15:16:41.934125 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2000b92_efd2_4794_9e7f_0f048dc44fe5.slice/crio-74bccccc05fdfad6ab9d6fa967f83b97df60c1f2f5488706b34dbab54fe45ce3 WatchSource:0}: Error finding container 74bccccc05fdfad6ab9d6fa967f83b97df60c1f2f5488706b34dbab54fe45ce3: Status 404 returned error can't find the container with id 74bccccc05fdfad6ab9d6fa967f83b97df60c1f2f5488706b34dbab54fe45ce3 Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.156007 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48032874-84c0-4869-9ff8-86cf9e4ea9a6" path="/var/lib/kubelet/pods/48032874-84c0-4869-9ff8-86cf9e4ea9a6/volumes" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.217009 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.227065 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.227686 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e3f4591b-3da5-4951-9288-57f89b217467","Type":"ContainerDied","Data":"adda0447597d58238b31c03e32a2fae74807a1c52b6c39513869c4041a273cc4"} Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.227721 4814 scope.go:117] "RemoveContainer" containerID="b8d0e5f0e516e604b2fdd60e2cea50201d14756eabe7831e8468b85bfbad49a1" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.241700 4814 generic.go:334] "Generic (PLEG): container finished" podID="1b53b09c-4d33-4898-99a0-77742974a99a" containerID="193d9ad0ef5ca7a2ddbc2be75c47a76950e389e9cf686ec3f9adf6eb363df8d3" exitCode=0 Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.242026 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1b53b09c-4d33-4898-99a0-77742974a99a","Type":"ContainerDied","Data":"193d9ad0ef5ca7a2ddbc2be75c47a76950e389e9cf686ec3f9adf6eb363df8d3"} Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.242096 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1b53b09c-4d33-4898-99a0-77742974a99a","Type":"ContainerDied","Data":"5192ec768be51ceb492585cadeffc74fa8de61256d00fea9d6681d8ce6ea0f59"} Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.242252 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.251594 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher29ce-account-delete-9ftns" event={"ID":"a2000b92-efd2-4794-9e7f-0f048dc44fe5","Type":"ContainerStarted","Data":"ee054742134987329b47688b18bcb8eec0ccbb6ff7ad683bc16e1571388ef0c2"} Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.251628 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher29ce-account-delete-9ftns" event={"ID":"a2000b92-efd2-4794-9e7f-0f048dc44fe5","Type":"ContainerStarted","Data":"74bccccc05fdfad6ab9d6fa967f83b97df60c1f2f5488706b34dbab54fe45ce3"} Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.266580 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.274178 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.276529 4814 scope.go:117] "RemoveContainer" containerID="193d9ad0ef5ca7a2ddbc2be75c47a76950e389e9cf686ec3f9adf6eb363df8d3" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.287259 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher29ce-account-delete-9ftns" podStartSLOduration=2.287242277 podStartE2EDuration="2.287242277s" podCreationTimestamp="2025-12-02 15:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:42.283537165 +0000 UTC m=+2864.407899369" watchObservedRunningTime="2025-12-02 15:16:42.287242277 +0000 UTC m=+2864.411604481" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.299436 4814 scope.go:117] "RemoveContainer" 
containerID="5997e7fa0e3a33555ecfa195f5d88b5715fd236b4b697988843c1b0e1d4c7c7c" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.344938 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b53b09c-4d33-4898-99a0-77742974a99a-logs\") pod \"1b53b09c-4d33-4898-99a0-77742974a99a\" (UID: \"1b53b09c-4d33-4898-99a0-77742974a99a\") " Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.344984 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcz8z\" (UniqueName: \"kubernetes.io/projected/1b53b09c-4d33-4898-99a0-77742974a99a-kube-api-access-hcz8z\") pod \"1b53b09c-4d33-4898-99a0-77742974a99a\" (UID: \"1b53b09c-4d33-4898-99a0-77742974a99a\") " Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.345087 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1b53b09c-4d33-4898-99a0-77742974a99a-custom-prometheus-ca\") pod \"1b53b09c-4d33-4898-99a0-77742974a99a\" (UID: \"1b53b09c-4d33-4898-99a0-77742974a99a\") " Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.345139 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b53b09c-4d33-4898-99a0-77742974a99a-config-data\") pod \"1b53b09c-4d33-4898-99a0-77742974a99a\" (UID: \"1b53b09c-4d33-4898-99a0-77742974a99a\") " Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.345193 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b53b09c-4d33-4898-99a0-77742974a99a-combined-ca-bundle\") pod \"1b53b09c-4d33-4898-99a0-77742974a99a\" (UID: \"1b53b09c-4d33-4898-99a0-77742974a99a\") " Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.346207 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1b53b09c-4d33-4898-99a0-77742974a99a-logs" (OuterVolumeSpecName: "logs") pod "1b53b09c-4d33-4898-99a0-77742974a99a" (UID: "1b53b09c-4d33-4898-99a0-77742974a99a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.367474 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b53b09c-4d33-4898-99a0-77742974a99a-kube-api-access-hcz8z" (OuterVolumeSpecName: "kube-api-access-hcz8z") pod "1b53b09c-4d33-4898-99a0-77742974a99a" (UID: "1b53b09c-4d33-4898-99a0-77742974a99a"). InnerVolumeSpecName "kube-api-access-hcz8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.375978 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b53b09c-4d33-4898-99a0-77742974a99a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b53b09c-4d33-4898-99a0-77742974a99a" (UID: "1b53b09c-4d33-4898-99a0-77742974a99a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.377834 4814 scope.go:117] "RemoveContainer" containerID="193d9ad0ef5ca7a2ddbc2be75c47a76950e389e9cf686ec3f9adf6eb363df8d3" Dec 02 15:16:42 crc kubenswrapper[4814]: E1202 15:16:42.378288 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"193d9ad0ef5ca7a2ddbc2be75c47a76950e389e9cf686ec3f9adf6eb363df8d3\": container with ID starting with 193d9ad0ef5ca7a2ddbc2be75c47a76950e389e9cf686ec3f9adf6eb363df8d3 not found: ID does not exist" containerID="193d9ad0ef5ca7a2ddbc2be75c47a76950e389e9cf686ec3f9adf6eb363df8d3" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.378390 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"193d9ad0ef5ca7a2ddbc2be75c47a76950e389e9cf686ec3f9adf6eb363df8d3"} err="failed to get container status \"193d9ad0ef5ca7a2ddbc2be75c47a76950e389e9cf686ec3f9adf6eb363df8d3\": rpc error: code = NotFound desc = could not find container \"193d9ad0ef5ca7a2ddbc2be75c47a76950e389e9cf686ec3f9adf6eb363df8d3\": container with ID starting with 193d9ad0ef5ca7a2ddbc2be75c47a76950e389e9cf686ec3f9adf6eb363df8d3 not found: ID does not exist" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.378467 4814 scope.go:117] "RemoveContainer" containerID="5997e7fa0e3a33555ecfa195f5d88b5715fd236b4b697988843c1b0e1d4c7c7c" Dec 02 15:16:42 crc kubenswrapper[4814]: E1202 15:16:42.378738 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5997e7fa0e3a33555ecfa195f5d88b5715fd236b4b697988843c1b0e1d4c7c7c\": container with ID starting with 5997e7fa0e3a33555ecfa195f5d88b5715fd236b4b697988843c1b0e1d4c7c7c not found: ID does not exist" containerID="5997e7fa0e3a33555ecfa195f5d88b5715fd236b4b697988843c1b0e1d4c7c7c" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.378850 
4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5997e7fa0e3a33555ecfa195f5d88b5715fd236b4b697988843c1b0e1d4c7c7c"} err="failed to get container status \"5997e7fa0e3a33555ecfa195f5d88b5715fd236b4b697988843c1b0e1d4c7c7c\": rpc error: code = NotFound desc = could not find container \"5997e7fa0e3a33555ecfa195f5d88b5715fd236b4b697988843c1b0e1d4c7c7c\": container with ID starting with 5997e7fa0e3a33555ecfa195f5d88b5715fd236b4b697988843c1b0e1d4c7c7c not found: ID does not exist" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.393302 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b53b09c-4d33-4898-99a0-77742974a99a-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "1b53b09c-4d33-4898-99a0-77742974a99a" (UID: "1b53b09c-4d33-4898-99a0-77742974a99a"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.426790 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b53b09c-4d33-4898-99a0-77742974a99a-config-data" (OuterVolumeSpecName: "config-data") pod "1b53b09c-4d33-4898-99a0-77742974a99a" (UID: "1b53b09c-4d33-4898-99a0-77742974a99a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.446907 4814 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1b53b09c-4d33-4898-99a0-77742974a99a-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.446937 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b53b09c-4d33-4898-99a0-77742974a99a-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.446946 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b53b09c-4d33-4898-99a0-77742974a99a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.446954 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b53b09c-4d33-4898-99a0-77742974a99a-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.446962 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcz8z\" (UniqueName: \"kubernetes.io/projected/1b53b09c-4d33-4898-99a0-77742974a99a-kube-api-access-hcz8z\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.574085 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:16:42 crc kubenswrapper[4814]: I1202 15:16:42.584758 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:16:43 crc kubenswrapper[4814]: I1202 15:16:43.267154 4814 generic.go:334] "Generic (PLEG): container finished" podID="a2000b92-efd2-4794-9e7f-0f048dc44fe5" containerID="ee054742134987329b47688b18bcb8eec0ccbb6ff7ad683bc16e1571388ef0c2" exitCode=0 Dec 02 15:16:43 
crc kubenswrapper[4814]: I1202 15:16:43.267392 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher29ce-account-delete-9ftns" event={"ID":"a2000b92-efd2-4794-9e7f-0f048dc44fe5","Type":"ContainerDied","Data":"ee054742134987329b47688b18bcb8eec0ccbb6ff7ad683bc16e1571388ef0c2"} Dec 02 15:16:44 crc kubenswrapper[4814]: I1202 15:16:44.156168 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b53b09c-4d33-4898-99a0-77742974a99a" path="/var/lib/kubelet/pods/1b53b09c-4d33-4898-99a0-77742974a99a/volumes" Dec 02 15:16:44 crc kubenswrapper[4814]: I1202 15:16:44.157063 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3f4591b-3da5-4951-9288-57f89b217467" path="/var/lib/kubelet/pods/e3f4591b-3da5-4951-9288-57f89b217467/volumes" Dec 02 15:16:44 crc kubenswrapper[4814]: I1202 15:16:44.691575 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher29ce-account-delete-9ftns" Dec 02 15:16:44 crc kubenswrapper[4814]: I1202 15:16:44.783970 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2000b92-efd2-4794-9e7f-0f048dc44fe5-operator-scripts\") pod \"a2000b92-efd2-4794-9e7f-0f048dc44fe5\" (UID: \"a2000b92-efd2-4794-9e7f-0f048dc44fe5\") " Dec 02 15:16:44 crc kubenswrapper[4814]: I1202 15:16:44.784043 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmx5b\" (UniqueName: \"kubernetes.io/projected/a2000b92-efd2-4794-9e7f-0f048dc44fe5-kube-api-access-kmx5b\") pod \"a2000b92-efd2-4794-9e7f-0f048dc44fe5\" (UID: \"a2000b92-efd2-4794-9e7f-0f048dc44fe5\") " Dec 02 15:16:44 crc kubenswrapper[4814]: I1202 15:16:44.784816 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2000b92-efd2-4794-9e7f-0f048dc44fe5-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "a2000b92-efd2-4794-9e7f-0f048dc44fe5" (UID: "a2000b92-efd2-4794-9e7f-0f048dc44fe5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:16:44 crc kubenswrapper[4814]: I1202 15:16:44.789644 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2000b92-efd2-4794-9e7f-0f048dc44fe5-kube-api-access-kmx5b" (OuterVolumeSpecName: "kube-api-access-kmx5b") pod "a2000b92-efd2-4794-9e7f-0f048dc44fe5" (UID: "a2000b92-efd2-4794-9e7f-0f048dc44fe5"). InnerVolumeSpecName "kube-api-access-kmx5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:16:44 crc kubenswrapper[4814]: I1202 15:16:44.845575 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:16:44 crc kubenswrapper[4814]: I1202 15:16:44.885683 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2000b92-efd2-4794-9e7f-0f048dc44fe5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:44 crc kubenswrapper[4814]: I1202 15:16:44.885714 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmx5b\" (UniqueName: \"kubernetes.io/projected/a2000b92-efd2-4794-9e7f-0f048dc44fe5-kube-api-access-kmx5b\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:44 crc kubenswrapper[4814]: I1202 15:16:44.986689 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a02474f6-d699-4ee7-818a-9aac661e1c27-logs\") pod \"a02474f6-d699-4ee7-818a-9aac661e1c27\" (UID: \"a02474f6-d699-4ee7-818a-9aac661e1c27\") " Dec 02 15:16:44 crc kubenswrapper[4814]: I1202 15:16:44.986746 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw77r\" (UniqueName: 
\"kubernetes.io/projected/a02474f6-d699-4ee7-818a-9aac661e1c27-kube-api-access-fw77r\") pod \"a02474f6-d699-4ee7-818a-9aac661e1c27\" (UID: \"a02474f6-d699-4ee7-818a-9aac661e1c27\") " Dec 02 15:16:44 crc kubenswrapper[4814]: I1202 15:16:44.986794 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02474f6-d699-4ee7-818a-9aac661e1c27-combined-ca-bundle\") pod \"a02474f6-d699-4ee7-818a-9aac661e1c27\" (UID: \"a02474f6-d699-4ee7-818a-9aac661e1c27\") " Dec 02 15:16:44 crc kubenswrapper[4814]: I1202 15:16:44.986817 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a02474f6-d699-4ee7-818a-9aac661e1c27-config-data\") pod \"a02474f6-d699-4ee7-818a-9aac661e1c27\" (UID: \"a02474f6-d699-4ee7-818a-9aac661e1c27\") " Dec 02 15:16:44 crc kubenswrapper[4814]: I1202 15:16:44.986987 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a02474f6-d699-4ee7-818a-9aac661e1c27-logs" (OuterVolumeSpecName: "logs") pod "a02474f6-d699-4ee7-818a-9aac661e1c27" (UID: "a02474f6-d699-4ee7-818a-9aac661e1c27"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:16:44 crc kubenswrapper[4814]: I1202 15:16:44.987470 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a02474f6-d699-4ee7-818a-9aac661e1c27-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:44 crc kubenswrapper[4814]: I1202 15:16:44.989953 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a02474f6-d699-4ee7-818a-9aac661e1c27-kube-api-access-fw77r" (OuterVolumeSpecName: "kube-api-access-fw77r") pod "a02474f6-d699-4ee7-818a-9aac661e1c27" (UID: "a02474f6-d699-4ee7-818a-9aac661e1c27"). InnerVolumeSpecName "kube-api-access-fw77r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.018501 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a02474f6-d699-4ee7-818a-9aac661e1c27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a02474f6-d699-4ee7-818a-9aac661e1c27" (UID: "a02474f6-d699-4ee7-818a-9aac661e1c27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.028905 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a02474f6-d699-4ee7-818a-9aac661e1c27-config-data" (OuterVolumeSpecName: "config-data") pod "a02474f6-d699-4ee7-818a-9aac661e1c27" (UID: "a02474f6-d699-4ee7-818a-9aac661e1c27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.088919 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw77r\" (UniqueName: \"kubernetes.io/projected/a02474f6-d699-4ee7-818a-9aac661e1c27-kube-api-access-fw77r\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.088954 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02474f6-d699-4ee7-818a-9aac661e1c27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.088964 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a02474f6-d699-4ee7-818a-9aac661e1c27-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.285082 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher29ce-account-delete-9ftns" 
event={"ID":"a2000b92-efd2-4794-9e7f-0f048dc44fe5","Type":"ContainerDied","Data":"74bccccc05fdfad6ab9d6fa967f83b97df60c1f2f5488706b34dbab54fe45ce3"} Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.285628 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74bccccc05fdfad6ab9d6fa967f83b97df60c1f2f5488706b34dbab54fe45ce3" Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.285148 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher29ce-account-delete-9ftns" Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.286647 4814 generic.go:334] "Generic (PLEG): container finished" podID="a02474f6-d699-4ee7-818a-9aac661e1c27" containerID="4c3ebab0cad03da2f720b9e295dca94627bff264cdef0cfe76a08c3e5e9b87de" exitCode=0 Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.286696 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a02474f6-d699-4ee7-818a-9aac661e1c27","Type":"ContainerDied","Data":"4c3ebab0cad03da2f720b9e295dca94627bff264cdef0cfe76a08c3e5e9b87de"} Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.286701 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.286749 4814 scope.go:117] "RemoveContainer" containerID="4c3ebab0cad03da2f720b9e295dca94627bff264cdef0cfe76a08c3e5e9b87de" Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.286736 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a02474f6-d699-4ee7-818a-9aac661e1c27","Type":"ContainerDied","Data":"6acf4dfa98a0cc5b802135e84e3ede55c7c013e409f75451488a97886e4d11dd"} Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.314318 4814 scope.go:117] "RemoveContainer" containerID="4c3ebab0cad03da2f720b9e295dca94627bff264cdef0cfe76a08c3e5e9b87de" Dec 02 15:16:45 crc kubenswrapper[4814]: E1202 15:16:45.314961 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c3ebab0cad03da2f720b9e295dca94627bff264cdef0cfe76a08c3e5e9b87de\": container with ID starting with 4c3ebab0cad03da2f720b9e295dca94627bff264cdef0cfe76a08c3e5e9b87de not found: ID does not exist" containerID="4c3ebab0cad03da2f720b9e295dca94627bff264cdef0cfe76a08c3e5e9b87de" Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.314998 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c3ebab0cad03da2f720b9e295dca94627bff264cdef0cfe76a08c3e5e9b87de"} err="failed to get container status \"4c3ebab0cad03da2f720b9e295dca94627bff264cdef0cfe76a08c3e5e9b87de\": rpc error: code = NotFound desc = could not find container \"4c3ebab0cad03da2f720b9e295dca94627bff264cdef0cfe76a08c3e5e9b87de\": container with ID starting with 4c3ebab0cad03da2f720b9e295dca94627bff264cdef0cfe76a08c3e5e9b87de not found: ID does not exist" Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.347325 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:16:45 
crc kubenswrapper[4814]: I1202 15:16:45.356195 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.948831 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-j2dll"] Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.955913 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-j2dll"] Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.978576 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-29ce-account-create-update-848m5"] Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.984530 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher29ce-account-delete-9ftns"] Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.990306 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-29ce-account-create-update-848m5"] Dec 02 15:16:45 crc kubenswrapper[4814]: I1202 15:16:45.996381 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher29ce-account-delete-9ftns"] Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.127481 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-xvxsj"] Dec 02 15:16:46 crc kubenswrapper[4814]: E1202 15:16:46.127809 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f4591b-3da5-4951-9288-57f89b217467" containerName="watcher-decision-engine" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.127825 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f4591b-3da5-4951-9288-57f89b217467" containerName="watcher-decision-engine" Dec 02 15:16:46 crc kubenswrapper[4814]: E1202 15:16:46.127838 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a02474f6-d699-4ee7-818a-9aac661e1c27" 
containerName="watcher-applier" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.127845 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a02474f6-d699-4ee7-818a-9aac661e1c27" containerName="watcher-applier" Dec 02 15:16:46 crc kubenswrapper[4814]: E1202 15:16:46.127853 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f4591b-3da5-4951-9288-57f89b217467" containerName="watcher-decision-engine" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.127860 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f4591b-3da5-4951-9288-57f89b217467" containerName="watcher-decision-engine" Dec 02 15:16:46 crc kubenswrapper[4814]: E1202 15:16:46.127867 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f4591b-3da5-4951-9288-57f89b217467" containerName="watcher-decision-engine" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.127872 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f4591b-3da5-4951-9288-57f89b217467" containerName="watcher-decision-engine" Dec 02 15:16:46 crc kubenswrapper[4814]: E1202 15:16:46.127881 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f4591b-3da5-4951-9288-57f89b217467" containerName="watcher-decision-engine" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.127887 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f4591b-3da5-4951-9288-57f89b217467" containerName="watcher-decision-engine" Dec 02 15:16:46 crc kubenswrapper[4814]: E1202 15:16:46.127897 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f4591b-3da5-4951-9288-57f89b217467" containerName="watcher-decision-engine" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.127904 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f4591b-3da5-4951-9288-57f89b217467" containerName="watcher-decision-engine" Dec 02 15:16:46 crc kubenswrapper[4814]: E1202 15:16:46.127921 4814 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a2000b92-efd2-4794-9e7f-0f048dc44fe5" containerName="mariadb-account-delete" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.127928 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2000b92-efd2-4794-9e7f-0f048dc44fe5" containerName="mariadb-account-delete" Dec 02 15:16:46 crc kubenswrapper[4814]: E1202 15:16:46.127937 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b53b09c-4d33-4898-99a0-77742974a99a" containerName="watcher-kuttl-api-log" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.127945 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b53b09c-4d33-4898-99a0-77742974a99a" containerName="watcher-kuttl-api-log" Dec 02 15:16:46 crc kubenswrapper[4814]: E1202 15:16:46.127964 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b53b09c-4d33-4898-99a0-77742974a99a" containerName="watcher-api" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.127971 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b53b09c-4d33-4898-99a0-77742974a99a" containerName="watcher-api" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.128122 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f4591b-3da5-4951-9288-57f89b217467" containerName="watcher-decision-engine" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.128132 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f4591b-3da5-4951-9288-57f89b217467" containerName="watcher-decision-engine" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.128144 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a02474f6-d699-4ee7-818a-9aac661e1c27" containerName="watcher-applier" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.128153 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f4591b-3da5-4951-9288-57f89b217467" containerName="watcher-decision-engine" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.128167 4814 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1b53b09c-4d33-4898-99a0-77742974a99a" containerName="watcher-api" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.128176 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2000b92-efd2-4794-9e7f-0f048dc44fe5" containerName="mariadb-account-delete" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.128182 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b53b09c-4d33-4898-99a0-77742974a99a" containerName="watcher-kuttl-api-log" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.128702 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-xvxsj" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.136674 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-d647-account-create-update-n5j8w"] Dec 02 15:16:46 crc kubenswrapper[4814]: E1202 15:16:46.137067 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f4591b-3da5-4951-9288-57f89b217467" containerName="watcher-decision-engine" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.137087 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f4591b-3da5-4951-9288-57f89b217467" containerName="watcher-decision-engine" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.137303 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f4591b-3da5-4951-9288-57f89b217467" containerName="watcher-decision-engine" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.137323 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f4591b-3da5-4951-9288-57f89b217467" containerName="watcher-decision-engine" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.137337 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f4591b-3da5-4951-9288-57f89b217467" containerName="watcher-decision-engine" Dec 02 15:16:46 crc 
kubenswrapper[4814]: I1202 15:16:46.137925 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-d647-account-create-update-n5j8w" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.148350 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.185603 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c242a03-42d3-47e8-9226-511f118bcc63" path="/var/lib/kubelet/pods/0c242a03-42d3-47e8-9226-511f118bcc63/volumes" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.186208 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a02474f6-d699-4ee7-818a-9aac661e1c27" path="/var/lib/kubelet/pods/a02474f6-d699-4ee7-818a-9aac661e1c27/volumes" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.186756 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2000b92-efd2-4794-9e7f-0f048dc44fe5" path="/var/lib/kubelet/pods/a2000b92-efd2-4794-9e7f-0f048dc44fe5/volumes" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.187913 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cca878c5-ec56-4e83-a0c5-c34d2bb25765" path="/var/lib/kubelet/pods/cca878c5-ec56-4e83-a0c5-c34d2bb25765/volumes" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.188472 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-xvxsj"] Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.188496 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-d647-account-create-update-n5j8w"] Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.205144 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrv6s\" (UniqueName: 
\"kubernetes.io/projected/d7fa1510-c400-479b-9471-f8c1514984c0-kube-api-access-qrv6s\") pod \"watcher-db-create-xvxsj\" (UID: \"d7fa1510-c400-479b-9471-f8c1514984c0\") " pod="watcher-kuttl-default/watcher-db-create-xvxsj" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.208794 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7fa1510-c400-479b-9471-f8c1514984c0-operator-scripts\") pod \"watcher-db-create-xvxsj\" (UID: \"d7fa1510-c400-479b-9471-f8c1514984c0\") " pod="watcher-kuttl-default/watcher-db-create-xvxsj" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.208926 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/001710ba-b668-41f4-a817-fcd45782dd05-operator-scripts\") pod \"watcher-d647-account-create-update-n5j8w\" (UID: \"001710ba-b668-41f4-a817-fcd45782dd05\") " pod="watcher-kuttl-default/watcher-d647-account-create-update-n5j8w" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.209053 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs4mn\" (UniqueName: \"kubernetes.io/projected/001710ba-b668-41f4-a817-fcd45782dd05-kube-api-access-fs4mn\") pod \"watcher-d647-account-create-update-n5j8w\" (UID: \"001710ba-b668-41f4-a817-fcd45782dd05\") " pod="watcher-kuttl-default/watcher-d647-account-create-update-n5j8w" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.311574 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7fa1510-c400-479b-9471-f8c1514984c0-operator-scripts\") pod \"watcher-db-create-xvxsj\" (UID: \"d7fa1510-c400-479b-9471-f8c1514984c0\") " pod="watcher-kuttl-default/watcher-db-create-xvxsj" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.311628 
4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/001710ba-b668-41f4-a817-fcd45782dd05-operator-scripts\") pod \"watcher-d647-account-create-update-n5j8w\" (UID: \"001710ba-b668-41f4-a817-fcd45782dd05\") " pod="watcher-kuttl-default/watcher-d647-account-create-update-n5j8w" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.311670 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs4mn\" (UniqueName: \"kubernetes.io/projected/001710ba-b668-41f4-a817-fcd45782dd05-kube-api-access-fs4mn\") pod \"watcher-d647-account-create-update-n5j8w\" (UID: \"001710ba-b668-41f4-a817-fcd45782dd05\") " pod="watcher-kuttl-default/watcher-d647-account-create-update-n5j8w" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.311720 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrv6s\" (UniqueName: \"kubernetes.io/projected/d7fa1510-c400-479b-9471-f8c1514984c0-kube-api-access-qrv6s\") pod \"watcher-db-create-xvxsj\" (UID: \"d7fa1510-c400-479b-9471-f8c1514984c0\") " pod="watcher-kuttl-default/watcher-db-create-xvxsj" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.312725 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/001710ba-b668-41f4-a817-fcd45782dd05-operator-scripts\") pod \"watcher-d647-account-create-update-n5j8w\" (UID: \"001710ba-b668-41f4-a817-fcd45782dd05\") " pod="watcher-kuttl-default/watcher-d647-account-create-update-n5j8w" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.312906 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7fa1510-c400-479b-9471-f8c1514984c0-operator-scripts\") pod \"watcher-db-create-xvxsj\" (UID: \"d7fa1510-c400-479b-9471-f8c1514984c0\") " 
pod="watcher-kuttl-default/watcher-db-create-xvxsj" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.328804 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs4mn\" (UniqueName: \"kubernetes.io/projected/001710ba-b668-41f4-a817-fcd45782dd05-kube-api-access-fs4mn\") pod \"watcher-d647-account-create-update-n5j8w\" (UID: \"001710ba-b668-41f4-a817-fcd45782dd05\") " pod="watcher-kuttl-default/watcher-d647-account-create-update-n5j8w" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.329524 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrv6s\" (UniqueName: \"kubernetes.io/projected/d7fa1510-c400-479b-9471-f8c1514984c0-kube-api-access-qrv6s\") pod \"watcher-db-create-xvxsj\" (UID: \"d7fa1510-c400-479b-9471-f8c1514984c0\") " pod="watcher-kuttl-default/watcher-db-create-xvxsj" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.478562 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-xvxsj" Dec 02 15:16:46 crc kubenswrapper[4814]: I1202 15:16:46.487426 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-d647-account-create-update-n5j8w" Dec 02 15:16:47 crc kubenswrapper[4814]: I1202 15:16:47.010430 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-xvxsj"] Dec 02 15:16:47 crc kubenswrapper[4814]: I1202 15:16:47.104734 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-d647-account-create-update-n5j8w"] Dec 02 15:16:47 crc kubenswrapper[4814]: I1202 15:16:47.303217 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-xvxsj" event={"ID":"d7fa1510-c400-479b-9471-f8c1514984c0","Type":"ContainerStarted","Data":"1ec825873efcd41122d412cc0a5abd1226d53aa8a782a7e5509641d377cddc02"} Dec 02 15:16:47 crc kubenswrapper[4814]: I1202 15:16:47.303276 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-xvxsj" event={"ID":"d7fa1510-c400-479b-9471-f8c1514984c0","Type":"ContainerStarted","Data":"f27620990de9a05110deaf6f3b31ef0973554f2d5154f605e8ff4bd66a025966"} Dec 02 15:16:47 crc kubenswrapper[4814]: I1202 15:16:47.305755 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-d647-account-create-update-n5j8w" event={"ID":"001710ba-b668-41f4-a817-fcd45782dd05","Type":"ContainerStarted","Data":"bab05be1e0dde0211684a40cd93614a2f42804549ba6ff224d6ffb7fd1e2e43c"} Dec 02 15:16:47 crc kubenswrapper[4814]: I1202 15:16:47.305785 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-d647-account-create-update-n5j8w" event={"ID":"001710ba-b668-41f4-a817-fcd45782dd05","Type":"ContainerStarted","Data":"87beba44b22c51622878ee90ed07f98eded4cd28872838e483cdb01c3088a18a"} Dec 02 15:16:47 crc kubenswrapper[4814]: I1202 15:16:47.322098 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-db-create-xvxsj" podStartSLOduration=1.322077927 
podStartE2EDuration="1.322077927s" podCreationTimestamp="2025-12-02 15:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:47.318630072 +0000 UTC m=+2869.442992286" watchObservedRunningTime="2025-12-02 15:16:47.322077927 +0000 UTC m=+2869.446440131" Dec 02 15:16:47 crc kubenswrapper[4814]: I1202 15:16:47.340907 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-d647-account-create-update-n5j8w" podStartSLOduration=1.340886971 podStartE2EDuration="1.340886971s" podCreationTimestamp="2025-12-02 15:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:47.333686424 +0000 UTC m=+2869.458048628" watchObservedRunningTime="2025-12-02 15:16:47.340886971 +0000 UTC m=+2869.465249185" Dec 02 15:16:48 crc kubenswrapper[4814]: I1202 15:16:48.314764 4814 generic.go:334] "Generic (PLEG): container finished" podID="d7fa1510-c400-479b-9471-f8c1514984c0" containerID="1ec825873efcd41122d412cc0a5abd1226d53aa8a782a7e5509641d377cddc02" exitCode=0 Dec 02 15:16:48 crc kubenswrapper[4814]: I1202 15:16:48.314816 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-xvxsj" event={"ID":"d7fa1510-c400-479b-9471-f8c1514984c0","Type":"ContainerDied","Data":"1ec825873efcd41122d412cc0a5abd1226d53aa8a782a7e5509641d377cddc02"} Dec 02 15:16:48 crc kubenswrapper[4814]: I1202 15:16:48.318444 4814 generic.go:334] "Generic (PLEG): container finished" podID="001710ba-b668-41f4-a817-fcd45782dd05" containerID="bab05be1e0dde0211684a40cd93614a2f42804549ba6ff224d6ffb7fd1e2e43c" exitCode=0 Dec 02 15:16:48 crc kubenswrapper[4814]: I1202 15:16:48.318602 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-d647-account-create-update-n5j8w" 
event={"ID":"001710ba-b668-41f4-a817-fcd45782dd05","Type":"ContainerDied","Data":"bab05be1e0dde0211684a40cd93614a2f42804549ba6ff224d6ffb7fd1e2e43c"} Dec 02 15:16:49 crc kubenswrapper[4814]: I1202 15:16:49.680944 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-xvxsj" Dec 02 15:16:49 crc kubenswrapper[4814]: I1202 15:16:49.793301 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-d647-account-create-update-n5j8w" Dec 02 15:16:49 crc kubenswrapper[4814]: I1202 15:16:49.870976 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7fa1510-c400-479b-9471-f8c1514984c0-operator-scripts\") pod \"d7fa1510-c400-479b-9471-f8c1514984c0\" (UID: \"d7fa1510-c400-479b-9471-f8c1514984c0\") " Dec 02 15:16:49 crc kubenswrapper[4814]: I1202 15:16:49.871738 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrv6s\" (UniqueName: \"kubernetes.io/projected/d7fa1510-c400-479b-9471-f8c1514984c0-kube-api-access-qrv6s\") pod \"d7fa1510-c400-479b-9471-f8c1514984c0\" (UID: \"d7fa1510-c400-479b-9471-f8c1514984c0\") " Dec 02 15:16:49 crc kubenswrapper[4814]: I1202 15:16:49.872071 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7fa1510-c400-479b-9471-f8c1514984c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7fa1510-c400-479b-9471-f8c1514984c0" (UID: "d7fa1510-c400-479b-9471-f8c1514984c0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:16:49 crc kubenswrapper[4814]: I1202 15:16:49.887407 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7fa1510-c400-479b-9471-f8c1514984c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:49 crc kubenswrapper[4814]: I1202 15:16:49.888565 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7fa1510-c400-479b-9471-f8c1514984c0-kube-api-access-qrv6s" (OuterVolumeSpecName: "kube-api-access-qrv6s") pod "d7fa1510-c400-479b-9471-f8c1514984c0" (UID: "d7fa1510-c400-479b-9471-f8c1514984c0"). InnerVolumeSpecName "kube-api-access-qrv6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:16:49 crc kubenswrapper[4814]: I1202 15:16:49.988052 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs4mn\" (UniqueName: \"kubernetes.io/projected/001710ba-b668-41f4-a817-fcd45782dd05-kube-api-access-fs4mn\") pod \"001710ba-b668-41f4-a817-fcd45782dd05\" (UID: \"001710ba-b668-41f4-a817-fcd45782dd05\") " Dec 02 15:16:49 crc kubenswrapper[4814]: I1202 15:16:49.988125 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/001710ba-b668-41f4-a817-fcd45782dd05-operator-scripts\") pod \"001710ba-b668-41f4-a817-fcd45782dd05\" (UID: \"001710ba-b668-41f4-a817-fcd45782dd05\") " Dec 02 15:16:49 crc kubenswrapper[4814]: I1202 15:16:49.988634 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/001710ba-b668-41f4-a817-fcd45782dd05-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "001710ba-b668-41f4-a817-fcd45782dd05" (UID: "001710ba-b668-41f4-a817-fcd45782dd05"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:16:49 crc kubenswrapper[4814]: I1202 15:16:49.988839 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrv6s\" (UniqueName: \"kubernetes.io/projected/d7fa1510-c400-479b-9471-f8c1514984c0-kube-api-access-qrv6s\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:49 crc kubenswrapper[4814]: I1202 15:16:49.993602 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/001710ba-b668-41f4-a817-fcd45782dd05-kube-api-access-fs4mn" (OuterVolumeSpecName: "kube-api-access-fs4mn") pod "001710ba-b668-41f4-a817-fcd45782dd05" (UID: "001710ba-b668-41f4-a817-fcd45782dd05"). InnerVolumeSpecName "kube-api-access-fs4mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:16:50 crc kubenswrapper[4814]: I1202 15:16:50.090032 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs4mn\" (UniqueName: \"kubernetes.io/projected/001710ba-b668-41f4-a817-fcd45782dd05-kube-api-access-fs4mn\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:50 crc kubenswrapper[4814]: I1202 15:16:50.090063 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/001710ba-b668-41f4-a817-fcd45782dd05-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:50 crc kubenswrapper[4814]: I1202 15:16:50.339301 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-xvxsj" event={"ID":"d7fa1510-c400-479b-9471-f8c1514984c0","Type":"ContainerDied","Data":"f27620990de9a05110deaf6f3b31ef0973554f2d5154f605e8ff4bd66a025966"} Dec 02 15:16:50 crc kubenswrapper[4814]: I1202 15:16:50.339345 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-xvxsj" Dec 02 15:16:50 crc kubenswrapper[4814]: I1202 15:16:50.339362 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f27620990de9a05110deaf6f3b31ef0973554f2d5154f605e8ff4bd66a025966" Dec 02 15:16:50 crc kubenswrapper[4814]: I1202 15:16:50.342033 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-d647-account-create-update-n5j8w" event={"ID":"001710ba-b668-41f4-a817-fcd45782dd05","Type":"ContainerDied","Data":"87beba44b22c51622878ee90ed07f98eded4cd28872838e483cdb01c3088a18a"} Dec 02 15:16:50 crc kubenswrapper[4814]: I1202 15:16:50.342061 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87beba44b22c51622878ee90ed07f98eded4cd28872838e483cdb01c3088a18a" Dec 02 15:16:50 crc kubenswrapper[4814]: I1202 15:16:50.342139 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-d647-account-create-update-n5j8w" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.520112 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp"] Dec 02 15:16:51 crc kubenswrapper[4814]: E1202 15:16:51.520712 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="001710ba-b668-41f4-a817-fcd45782dd05" containerName="mariadb-account-create-update" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.520727 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="001710ba-b668-41f4-a817-fcd45782dd05" containerName="mariadb-account-create-update" Dec 02 15:16:51 crc kubenswrapper[4814]: E1202 15:16:51.520747 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7fa1510-c400-479b-9471-f8c1514984c0" containerName="mariadb-database-create" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.520756 4814 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d7fa1510-c400-479b-9471-f8c1514984c0" containerName="mariadb-database-create" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.520956 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7fa1510-c400-479b-9471-f8c1514984c0" containerName="mariadb-database-create" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.520966 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="001710ba-b668-41f4-a817-fcd45782dd05" containerName="mariadb-account-create-update" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.521517 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.525479 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.537972 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp"] Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.538221 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-qplrg" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.715313 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bdfjp\" (UID: \"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.715407 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drh4j\" (UniqueName: \"kubernetes.io/projected/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-kube-api-access-drh4j\") pod 
\"watcher-kuttl-db-sync-bdfjp\" (UID: \"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.715445 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bdfjp\" (UID: \"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.715539 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-config-data\") pod \"watcher-kuttl-db-sync-bdfjp\" (UID: \"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.817031 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drh4j\" (UniqueName: \"kubernetes.io/projected/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-kube-api-access-drh4j\") pod \"watcher-kuttl-db-sync-bdfjp\" (UID: \"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.817085 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bdfjp\" (UID: \"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.817116 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-config-data\") pod \"watcher-kuttl-db-sync-bdfjp\" (UID: \"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.817231 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bdfjp\" (UID: \"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.822547 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bdfjp\" (UID: \"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.822560 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bdfjp\" (UID: \"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.822990 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-config-data\") pod \"watcher-kuttl-db-sync-bdfjp\" (UID: \"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.834022 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drh4j\" (UniqueName: 
\"kubernetes.io/projected/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-kube-api-access-drh4j\") pod \"watcher-kuttl-db-sync-bdfjp\" (UID: \"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" Dec 02 15:16:51 crc kubenswrapper[4814]: I1202 15:16:51.838105 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" Dec 02 15:16:52 crc kubenswrapper[4814]: I1202 15:16:52.317271 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp"] Dec 02 15:16:52 crc kubenswrapper[4814]: I1202 15:16:52.358241 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" event={"ID":"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532","Type":"ContainerStarted","Data":"2dfcf7874a0b372d829def6907444e717139878616a4b547320395e826741163"} Dec 02 15:16:53 crc kubenswrapper[4814]: I1202 15:16:53.370546 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" event={"ID":"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532","Type":"ContainerStarted","Data":"434f413a71f702f0105d3c78f35b13c833775f312e0ff59da4dade67f1e2c7a0"} Dec 02 15:16:53 crc kubenswrapper[4814]: I1202 15:16:53.398949 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" podStartSLOduration=2.398927774 podStartE2EDuration="2.398927774s" podCreationTimestamp="2025-12-02 15:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:53.390122997 +0000 UTC m=+2875.514485201" watchObservedRunningTime="2025-12-02 15:16:53.398927774 +0000 UTC m=+2875.523289978" Dec 02 15:16:55 crc kubenswrapper[4814]: E1202 15:16:55.039275 4814 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial 
failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd36a7f25_7b4d_4e8b_840d_ccfd8f0e5532.slice/crio-conmon-434f413a71f702f0105d3c78f35b13c833775f312e0ff59da4dade67f1e2c7a0.scope\": RecentStats: unable to find data in memory cache]" Dec 02 15:16:55 crc kubenswrapper[4814]: I1202 15:16:55.391299 4814 generic.go:334] "Generic (PLEG): container finished" podID="d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532" containerID="434f413a71f702f0105d3c78f35b13c833775f312e0ff59da4dade67f1e2c7a0" exitCode=0 Dec 02 15:16:55 crc kubenswrapper[4814]: I1202 15:16:55.391402 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" event={"ID":"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532","Type":"ContainerDied","Data":"434f413a71f702f0105d3c78f35b13c833775f312e0ff59da4dade67f1e2c7a0"} Dec 02 15:16:56 crc kubenswrapper[4814]: I1202 15:16:56.756714 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" Dec 02 15:16:56 crc kubenswrapper[4814]: I1202 15:16:56.903043 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drh4j\" (UniqueName: \"kubernetes.io/projected/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-kube-api-access-drh4j\") pod \"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532\" (UID: \"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532\") " Dec 02 15:16:56 crc kubenswrapper[4814]: I1202 15:16:56.903128 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-db-sync-config-data\") pod \"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532\" (UID: \"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532\") " Dec 02 15:16:56 crc kubenswrapper[4814]: I1202 15:16:56.903155 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-config-data\") pod \"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532\" (UID: \"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532\") " Dec 02 15:16:56 crc kubenswrapper[4814]: I1202 15:16:56.903327 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-combined-ca-bundle\") pod \"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532\" (UID: \"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532\") " Dec 02 15:16:56 crc kubenswrapper[4814]: I1202 15:16:56.908456 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532" (UID: "d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:56 crc kubenswrapper[4814]: I1202 15:16:56.908477 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-kube-api-access-drh4j" (OuterVolumeSpecName: "kube-api-access-drh4j") pod "d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532" (UID: "d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532"). InnerVolumeSpecName "kube-api-access-drh4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:16:56 crc kubenswrapper[4814]: I1202 15:16:56.929383 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532" (UID: "d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:56 crc kubenswrapper[4814]: I1202 15:16:56.953044 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-config-data" (OuterVolumeSpecName: "config-data") pod "d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532" (UID: "d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:57 crc kubenswrapper[4814]: I1202 15:16:57.005605 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:57 crc kubenswrapper[4814]: I1202 15:16:57.005664 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drh4j\" (UniqueName: \"kubernetes.io/projected/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-kube-api-access-drh4j\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:57 crc kubenswrapper[4814]: I1202 15:16:57.005686 4814 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:57 crc kubenswrapper[4814]: I1202 15:16:57.005705 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:57 crc kubenswrapper[4814]: I1202 15:16:57.412066 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" event={"ID":"d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532","Type":"ContainerDied","Data":"2dfcf7874a0b372d829def6907444e717139878616a4b547320395e826741163"} Dec 02 15:16:57 crc kubenswrapper[4814]: I1202 15:16:57.412141 4814 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="2dfcf7874a0b372d829def6907444e717139878616a4b547320395e826741163" Dec 02 15:16:57 crc kubenswrapper[4814]: I1202 15:16:57.412234 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.020544 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:16:58 crc kubenswrapper[4814]: E1202 15:16:58.020970 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532" containerName="watcher-kuttl-db-sync" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.020989 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532" containerName="watcher-kuttl-db-sync" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.021211 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532" containerName="watcher-kuttl-db-sync" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.022376 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.035222 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-qplrg" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.035501 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.061682 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.061734 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.062740 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.086534 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.107210 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.160426 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.161414 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.175593 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.176426 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.225091 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa7e0551-c005-4812-b91c-9913814f747b-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"fa7e0551-c005-4812-b91c-9913814f747b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.225141 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/320ded89-e851-4a10-897f-bcb7127b6d22-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"320ded89-e851-4a10-897f-bcb7127b6d22\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.225179 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/320ded89-e851-4a10-897f-bcb7127b6d22-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"320ded89-e851-4a10-897f-bcb7127b6d22\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.225209 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcxqk\" (UniqueName: \"kubernetes.io/projected/fa7e0551-c005-4812-b91c-9913814f747b-kube-api-access-jcxqk\") pod 
\"watcher-kuttl-api-0\" (UID: \"fa7e0551-c005-4812-b91c-9913814f747b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.225262 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fa7e0551-c005-4812-b91c-9913814f747b-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"fa7e0551-c005-4812-b91c-9913814f747b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.225281 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa7e0551-c005-4812-b91c-9913814f747b-logs\") pod \"watcher-kuttl-api-0\" (UID: \"fa7e0551-c005-4812-b91c-9913814f747b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.225314 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/320ded89-e851-4a10-897f-bcb7127b6d22-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"320ded89-e851-4a10-897f-bcb7127b6d22\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.225331 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa7e0551-c005-4812-b91c-9913814f747b-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"fa7e0551-c005-4812-b91c-9913814f747b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.225355 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plpw2\" (UniqueName: \"kubernetes.io/projected/320ded89-e851-4a10-897f-bcb7127b6d22-kube-api-access-plpw2\") pod 
\"watcher-kuttl-applier-0\" (UID: \"320ded89-e851-4a10-897f-bcb7127b6d22\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.329550 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/320ded89-e851-4a10-897f-bcb7127b6d22-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"320ded89-e851-4a10-897f-bcb7127b6d22\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.329635 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcxqk\" (UniqueName: \"kubernetes.io/projected/fa7e0551-c005-4812-b91c-9913814f747b-kube-api-access-jcxqk\") pod \"watcher-kuttl-api-0\" (UID: \"fa7e0551-c005-4812-b91c-9913814f747b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.329671 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fa7e0551-c005-4812-b91c-9913814f747b-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"fa7e0551-c005-4812-b91c-9913814f747b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.329689 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa7e0551-c005-4812-b91c-9913814f747b-logs\") pod \"watcher-kuttl-api-0\" (UID: \"fa7e0551-c005-4812-b91c-9913814f747b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.329712 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9b519b57-d380-444e-863b-b42e4d4eb06c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"9b519b57-d380-444e-863b-b42e4d4eb06c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.329762 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b519b57-d380-444e-863b-b42e4d4eb06c-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9b519b57-d380-444e-863b-b42e4d4eb06c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.329777 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c9k8\" (UniqueName: \"kubernetes.io/projected/9b519b57-d380-444e-863b-b42e4d4eb06c-kube-api-access-6c9k8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9b519b57-d380-444e-863b-b42e4d4eb06c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.329796 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b519b57-d380-444e-863b-b42e4d4eb06c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9b519b57-d380-444e-863b-b42e4d4eb06c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.329810 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/320ded89-e851-4a10-897f-bcb7127b6d22-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"320ded89-e851-4a10-897f-bcb7127b6d22\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.329830 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa7e0551-c005-4812-b91c-9913814f747b-config-data\") pod 
\"watcher-kuttl-api-0\" (UID: \"fa7e0551-c005-4812-b91c-9913814f747b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.329854 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plpw2\" (UniqueName: \"kubernetes.io/projected/320ded89-e851-4a10-897f-bcb7127b6d22-kube-api-access-plpw2\") pod \"watcher-kuttl-applier-0\" (UID: \"320ded89-e851-4a10-897f-bcb7127b6d22\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.329878 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa7e0551-c005-4812-b91c-9913814f747b-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"fa7e0551-c005-4812-b91c-9913814f747b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.329917 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/320ded89-e851-4a10-897f-bcb7127b6d22-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"320ded89-e851-4a10-897f-bcb7127b6d22\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.329933 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b519b57-d380-444e-863b-b42e4d4eb06c-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9b519b57-d380-444e-863b-b42e4d4eb06c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.330861 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/320ded89-e851-4a10-897f-bcb7127b6d22-logs\") pod 
\"watcher-kuttl-applier-0\" (UID: \"320ded89-e851-4a10-897f-bcb7127b6d22\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.331745 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa7e0551-c005-4812-b91c-9913814f747b-logs\") pod \"watcher-kuttl-api-0\" (UID: \"fa7e0551-c005-4812-b91c-9913814f747b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.338963 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.339303 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.339934 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fa7e0551-c005-4812-b91c-9913814f747b-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"fa7e0551-c005-4812-b91c-9913814f747b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.342836 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/320ded89-e851-4a10-897f-bcb7127b6d22-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"320ded89-e851-4a10-897f-bcb7127b6d22\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.350307 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa7e0551-c005-4812-b91c-9913814f747b-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"fa7e0551-c005-4812-b91c-9913814f747b\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.355058 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa7e0551-c005-4812-b91c-9913814f747b-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"fa7e0551-c005-4812-b91c-9913814f747b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.362771 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/320ded89-e851-4a10-897f-bcb7127b6d22-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"320ded89-e851-4a10-897f-bcb7127b6d22\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.370871 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcxqk\" (UniqueName: \"kubernetes.io/projected/fa7e0551-c005-4812-b91c-9913814f747b-kube-api-access-jcxqk\") pod \"watcher-kuttl-api-0\" (UID: \"fa7e0551-c005-4812-b91c-9913814f747b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.371582 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plpw2\" (UniqueName: \"kubernetes.io/projected/320ded89-e851-4a10-897f-bcb7127b6d22-kube-api-access-plpw2\") pod \"watcher-kuttl-applier-0\" (UID: \"320ded89-e851-4a10-897f-bcb7127b6d22\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.403072 4814 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-qplrg" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.403330 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.403657 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.430935 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b519b57-d380-444e-863b-b42e4d4eb06c-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9b519b57-d380-444e-863b-b42e4d4eb06c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.431072 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9b519b57-d380-444e-863b-b42e4d4eb06c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9b519b57-d380-444e-863b-b42e4d4eb06c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.431114 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b519b57-d380-444e-863b-b42e4d4eb06c-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9b519b57-d380-444e-863b-b42e4d4eb06c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.431138 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c9k8\" (UniqueName: \"kubernetes.io/projected/9b519b57-d380-444e-863b-b42e4d4eb06c-kube-api-access-6c9k8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9b519b57-d380-444e-863b-b42e4d4eb06c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.431165 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b519b57-d380-444e-863b-b42e4d4eb06c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9b519b57-d380-444e-863b-b42e4d4eb06c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.431624 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b519b57-d380-444e-863b-b42e4d4eb06c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9b519b57-d380-444e-863b-b42e4d4eb06c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.440105 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b519b57-d380-444e-863b-b42e4d4eb06c-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9b519b57-d380-444e-863b-b42e4d4eb06c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.440685 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9b519b57-d380-444e-863b-b42e4d4eb06c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9b519b57-d380-444e-863b-b42e4d4eb06c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.440717 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b519b57-d380-444e-863b-b42e4d4eb06c-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9b519b57-d380-444e-863b-b42e4d4eb06c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.458808 4814 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6c9k8\" (UniqueName: \"kubernetes.io/projected/9b519b57-d380-444e-863b-b42e4d4eb06c-kube-api-access-6c9k8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9b519b57-d380-444e-863b-b42e4d4eb06c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.555871 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.904188 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 15:16:58 crc kubenswrapper[4814]: I1202 15:16:58.993626 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 15:16:59 crc kubenswrapper[4814]: W1202 15:16:59.003199 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod320ded89_e851_4a10_897f_bcb7127b6d22.slice/crio-7cefee93573bf9dcf254e2416144586cc808df61079dc9fd910d4a45162ff802 WatchSource:0}: Error finding container 7cefee93573bf9dcf254e2416144586cc808df61079dc9fd910d4a45162ff802: Status 404 returned error can't find the container with id 7cefee93573bf9dcf254e2416144586cc808df61079dc9fd910d4a45162ff802 Dec 02 15:16:59 crc kubenswrapper[4814]: I1202 15:16:59.097034 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 15:16:59 crc kubenswrapper[4814]: I1202 15:16:59.448680 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"fa7e0551-c005-4812-b91c-9913814f747b","Type":"ContainerStarted","Data":"e9b7301a346657afd14e164b79acad28feb0bbe823ed60ddaa90a11d765557fb"} Dec 02 15:16:59 crc kubenswrapper[4814]: I1202 15:16:59.449015 4814 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"fa7e0551-c005-4812-b91c-9913814f747b","Type":"ContainerStarted","Data":"f69ec70dc084218690e02271017bf49f809e0930425f5bff28ed40daf22acfcb"} Dec 02 15:16:59 crc kubenswrapper[4814]: I1202 15:16:59.449028 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"fa7e0551-c005-4812-b91c-9913814f747b","Type":"ContainerStarted","Data":"b3c30613216c7a8481c0d393f7139c8352c392d5d215c81566ac61dd1b74329c"} Dec 02 15:16:59 crc kubenswrapper[4814]: I1202 15:16:59.449044 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:16:59 crc kubenswrapper[4814]: I1202 15:16:59.450499 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"320ded89-e851-4a10-897f-bcb7127b6d22","Type":"ContainerStarted","Data":"6409b463e9a3dba71f7d6ed0189c3308d352c905a5b38751fed674877200c231"} Dec 02 15:16:59 crc kubenswrapper[4814]: I1202 15:16:59.450539 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"320ded89-e851-4a10-897f-bcb7127b6d22","Type":"ContainerStarted","Data":"7cefee93573bf9dcf254e2416144586cc808df61079dc9fd910d4a45162ff802"} Dec 02 15:16:59 crc kubenswrapper[4814]: I1202 15:16:59.456437 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9b519b57-d380-444e-863b-b42e4d4eb06c","Type":"ContainerStarted","Data":"75c65295f81b3f09af097b1f68a4878271c3c7eca869abcda7bc7b41d5c31f7e"} Dec 02 15:16:59 crc kubenswrapper[4814]: I1202 15:16:59.456516 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"9b519b57-d380-444e-863b-b42e4d4eb06c","Type":"ContainerStarted","Data":"d639b23af16a40eeed235a02de31469045a4b454338847dd7761eda63f82b016"} Dec 02 15:16:59 crc kubenswrapper[4814]: I1202 15:16:59.472043 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.472027987 podStartE2EDuration="2.472027987s" podCreationTimestamp="2025-12-02 15:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:59.467202198 +0000 UTC m=+2881.591564402" watchObservedRunningTime="2025-12-02 15:16:59.472027987 +0000 UTC m=+2881.596390191" Dec 02 15:16:59 crc kubenswrapper[4814]: I1202 15:16:59.512957 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=1.512938866 podStartE2EDuration="1.512938866s" podCreationTimestamp="2025-12-02 15:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:59.512161327 +0000 UTC m=+2881.636523531" watchObservedRunningTime="2025-12-02 15:16:59.512938866 +0000 UTC m=+2881.637301070" Dec 02 15:16:59 crc kubenswrapper[4814]: I1202 15:16:59.513515 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.51351087 podStartE2EDuration="1.51351087s" podCreationTimestamp="2025-12-02 15:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:59.490281417 +0000 UTC m=+2881.614643611" watchObservedRunningTime="2025-12-02 15:16:59.51351087 +0000 UTC m=+2881.637873074" Dec 02 15:17:02 crc kubenswrapper[4814]: I1202 15:17:02.022060 4814 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:17:02 crc kubenswrapper[4814]: I1202 15:17:02.480984 4814 generic.go:334] "Generic (PLEG): container finished" podID="9b519b57-d380-444e-863b-b42e4d4eb06c" containerID="75c65295f81b3f09af097b1f68a4878271c3c7eca869abcda7bc7b41d5c31f7e" exitCode=1 Dec 02 15:17:02 crc kubenswrapper[4814]: I1202 15:17:02.481092 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9b519b57-d380-444e-863b-b42e4d4eb06c","Type":"ContainerDied","Data":"75c65295f81b3f09af097b1f68a4878271c3c7eca869abcda7bc7b41d5c31f7e"} Dec 02 15:17:02 crc kubenswrapper[4814]: I1202 15:17:02.481908 4814 scope.go:117] "RemoveContainer" containerID="75c65295f81b3f09af097b1f68a4878271c3c7eca869abcda7bc7b41d5c31f7e" Dec 02 15:17:03 crc kubenswrapper[4814]: I1202 15:17:03.404932 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:17:03 crc kubenswrapper[4814]: I1202 15:17:03.405332 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:17:03 crc kubenswrapper[4814]: I1202 15:17:03.491152 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9b519b57-d380-444e-863b-b42e4d4eb06c","Type":"ContainerStarted","Data":"69da43b5a5649e0e2deb51e5fe58aea87e03f4fc3d3eebd2cb72d9578e21f757"} Dec 02 15:17:04 crc kubenswrapper[4814]: I1202 15:17:04.131337 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-54m7d"] Dec 02 15:17:04 crc kubenswrapper[4814]: I1202 15:17:04.144994 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-54m7d" Dec 02 15:17:04 crc kubenswrapper[4814]: I1202 15:17:04.191408 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-54m7d"] Dec 02 15:17:04 crc kubenswrapper[4814]: I1202 15:17:04.336512 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm6r5\" (UniqueName: \"kubernetes.io/projected/b3324d1b-eda6-4fd8-9e14-fb52799ea71f-kube-api-access-vm6r5\") pod \"redhat-marketplace-54m7d\" (UID: \"b3324d1b-eda6-4fd8-9e14-fb52799ea71f\") " pod="openshift-marketplace/redhat-marketplace-54m7d" Dec 02 15:17:04 crc kubenswrapper[4814]: I1202 15:17:04.336628 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3324d1b-eda6-4fd8-9e14-fb52799ea71f-catalog-content\") pod \"redhat-marketplace-54m7d\" (UID: \"b3324d1b-eda6-4fd8-9e14-fb52799ea71f\") " pod="openshift-marketplace/redhat-marketplace-54m7d" Dec 02 15:17:04 crc kubenswrapper[4814]: I1202 15:17:04.336650 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3324d1b-eda6-4fd8-9e14-fb52799ea71f-utilities\") pod \"redhat-marketplace-54m7d\" (UID: \"b3324d1b-eda6-4fd8-9e14-fb52799ea71f\") " pod="openshift-marketplace/redhat-marketplace-54m7d" Dec 02 15:17:04 crc kubenswrapper[4814]: I1202 15:17:04.438469 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3324d1b-eda6-4fd8-9e14-fb52799ea71f-catalog-content\") pod \"redhat-marketplace-54m7d\" (UID: \"b3324d1b-eda6-4fd8-9e14-fb52799ea71f\") " pod="openshift-marketplace/redhat-marketplace-54m7d" Dec 02 15:17:04 crc kubenswrapper[4814]: I1202 15:17:04.438516 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3324d1b-eda6-4fd8-9e14-fb52799ea71f-utilities\") pod \"redhat-marketplace-54m7d\" (UID: \"b3324d1b-eda6-4fd8-9e14-fb52799ea71f\") " pod="openshift-marketplace/redhat-marketplace-54m7d" Dec 02 15:17:04 crc kubenswrapper[4814]: I1202 15:17:04.438601 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm6r5\" (UniqueName: \"kubernetes.io/projected/b3324d1b-eda6-4fd8-9e14-fb52799ea71f-kube-api-access-vm6r5\") pod \"redhat-marketplace-54m7d\" (UID: \"b3324d1b-eda6-4fd8-9e14-fb52799ea71f\") " pod="openshift-marketplace/redhat-marketplace-54m7d" Dec 02 15:17:04 crc kubenswrapper[4814]: I1202 15:17:04.439042 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3324d1b-eda6-4fd8-9e14-fb52799ea71f-utilities\") pod \"redhat-marketplace-54m7d\" (UID: \"b3324d1b-eda6-4fd8-9e14-fb52799ea71f\") " pod="openshift-marketplace/redhat-marketplace-54m7d" Dec 02 15:17:04 crc kubenswrapper[4814]: I1202 15:17:04.439036 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3324d1b-eda6-4fd8-9e14-fb52799ea71f-catalog-content\") pod \"redhat-marketplace-54m7d\" (UID: \"b3324d1b-eda6-4fd8-9e14-fb52799ea71f\") " pod="openshift-marketplace/redhat-marketplace-54m7d" Dec 02 15:17:04 crc kubenswrapper[4814]: I1202 15:17:04.458349 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm6r5\" (UniqueName: \"kubernetes.io/projected/b3324d1b-eda6-4fd8-9e14-fb52799ea71f-kube-api-access-vm6r5\") pod \"redhat-marketplace-54m7d\" (UID: \"b3324d1b-eda6-4fd8-9e14-fb52799ea71f\") " pod="openshift-marketplace/redhat-marketplace-54m7d" Dec 02 15:17:04 crc kubenswrapper[4814]: I1202 15:17:04.492771 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-54m7d" Dec 02 15:17:04 crc kubenswrapper[4814]: I1202 15:17:04.974698 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-54m7d"] Dec 02 15:17:04 crc kubenswrapper[4814]: W1202 15:17:04.977369 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3324d1b_eda6_4fd8_9e14_fb52799ea71f.slice/crio-65e648a9abbc811e9b39899e4c608c6e50f3b5f8e4d36f8adb9c832b3596972f WatchSource:0}: Error finding container 65e648a9abbc811e9b39899e4c608c6e50f3b5f8e4d36f8adb9c832b3596972f: Status 404 returned error can't find the container with id 65e648a9abbc811e9b39899e4c608c6e50f3b5f8e4d36f8adb9c832b3596972f Dec 02 15:17:05 crc kubenswrapper[4814]: I1202 15:17:05.521504 4814 generic.go:334] "Generic (PLEG): container finished" podID="b3324d1b-eda6-4fd8-9e14-fb52799ea71f" containerID="26426fee3a401b3cf0c3ba4dd6be583710799e9601a5345acf1c6b8f41b48390" exitCode=0 Dec 02 15:17:05 crc kubenswrapper[4814]: I1202 15:17:05.521658 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54m7d" event={"ID":"b3324d1b-eda6-4fd8-9e14-fb52799ea71f","Type":"ContainerDied","Data":"26426fee3a401b3cf0c3ba4dd6be583710799e9601a5345acf1c6b8f41b48390"} Dec 02 15:17:05 crc kubenswrapper[4814]: I1202 15:17:05.521845 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54m7d" event={"ID":"b3324d1b-eda6-4fd8-9e14-fb52799ea71f","Type":"ContainerStarted","Data":"65e648a9abbc811e9b39899e4c608c6e50f3b5f8e4d36f8adb9c832b3596972f"} Dec 02 15:17:06 crc kubenswrapper[4814]: I1202 15:17:06.530786 4814 generic.go:334] "Generic (PLEG): container finished" podID="9b519b57-d380-444e-863b-b42e4d4eb06c" containerID="69da43b5a5649e0e2deb51e5fe58aea87e03f4fc3d3eebd2cb72d9578e21f757" exitCode=1 Dec 02 15:17:06 crc kubenswrapper[4814]: I1202 
15:17:06.530858 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9b519b57-d380-444e-863b-b42e4d4eb06c","Type":"ContainerDied","Data":"69da43b5a5649e0e2deb51e5fe58aea87e03f4fc3d3eebd2cb72d9578e21f757"} Dec 02 15:17:06 crc kubenswrapper[4814]: I1202 15:17:06.531101 4814 scope.go:117] "RemoveContainer" containerID="75c65295f81b3f09af097b1f68a4878271c3c7eca869abcda7bc7b41d5c31f7e" Dec 02 15:17:06 crc kubenswrapper[4814]: I1202 15:17:06.531729 4814 scope.go:117] "RemoveContainer" containerID="69da43b5a5649e0e2deb51e5fe58aea87e03f4fc3d3eebd2cb72d9578e21f757" Dec 02 15:17:06 crc kubenswrapper[4814]: E1202 15:17:06.531930 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:17:07 crc kubenswrapper[4814]: I1202 15:17:07.541755 4814 generic.go:334] "Generic (PLEG): container finished" podID="b3324d1b-eda6-4fd8-9e14-fb52799ea71f" containerID="39e28a9cd5d10f0a716933b39fdd4f59086f0ebb7ea66d5a3623882a32b3482e" exitCode=0 Dec 02 15:17:07 crc kubenswrapper[4814]: I1202 15:17:07.542123 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54m7d" event={"ID":"b3324d1b-eda6-4fd8-9e14-fb52799ea71f","Type":"ContainerDied","Data":"39e28a9cd5d10f0a716933b39fdd4f59086f0ebb7ea66d5a3623882a32b3482e"} Dec 02 15:17:08 crc kubenswrapper[4814]: I1202 15:17:08.405138 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:17:08 crc kubenswrapper[4814]: I1202 15:17:08.406929 4814 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:17:08 crc kubenswrapper[4814]: I1202 15:17:08.412766 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:17:08 crc kubenswrapper[4814]: I1202 15:17:08.437378 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:17:08 crc kubenswrapper[4814]: I1202 15:17:08.558094 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54m7d" event={"ID":"b3324d1b-eda6-4fd8-9e14-fb52799ea71f","Type":"ContainerStarted","Data":"9f67290bde44519690b795743ee497435e4e8890cda02d9e9e715e6657e690a4"} Dec 02 15:17:08 crc kubenswrapper[4814]: I1202 15:17:08.559009 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:17:08 crc kubenswrapper[4814]: I1202 15:17:08.559032 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:17:08 crc kubenswrapper[4814]: I1202 15:17:08.560051 4814 scope.go:117] "RemoveContainer" containerID="69da43b5a5649e0e2deb51e5fe58aea87e03f4fc3d3eebd2cb72d9578e21f757" Dec 02 15:17:08 crc kubenswrapper[4814]: E1202 15:17:08.560273 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:17:08 crc kubenswrapper[4814]: I1202 15:17:08.580673 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 15:17:08 crc kubenswrapper[4814]: I1202 15:17:08.598832 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-54m7d" podStartSLOduration=1.922141477 podStartE2EDuration="4.59881496s" podCreationTimestamp="2025-12-02 15:17:04 +0000 UTC" firstStartedPulling="2025-12-02 15:17:05.525258151 +0000 UTC m=+2887.649620355" lastFinishedPulling="2025-12-02 15:17:08.201931624 +0000 UTC m=+2890.326293838" observedRunningTime="2025-12-02 15:17:08.593489318 +0000 UTC m=+2890.717851532" watchObservedRunningTime="2025-12-02 15:17:08.59881496 +0000 UTC m=+2890.723177164" Dec 02 15:17:08 crc kubenswrapper[4814]: I1202 15:17:08.730703 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 15:17:11 crc kubenswrapper[4814]: I1202 15:17:11.912612 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hx5mg"] Dec 02 15:17:11 crc kubenswrapper[4814]: I1202 15:17:11.914720 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hx5mg" Dec 02 15:17:11 crc kubenswrapper[4814]: I1202 15:17:11.923730 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hx5mg"] Dec 02 15:17:11 crc kubenswrapper[4814]: I1202 15:17:11.958619 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pctn7\" (UniqueName: \"kubernetes.io/projected/2833ecfa-101a-4366-bf29-78982d939334-kube-api-access-pctn7\") pod \"community-operators-hx5mg\" (UID: \"2833ecfa-101a-4366-bf29-78982d939334\") " pod="openshift-marketplace/community-operators-hx5mg" Dec 02 15:17:11 crc kubenswrapper[4814]: I1202 15:17:11.958845 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2833ecfa-101a-4366-bf29-78982d939334-utilities\") pod \"community-operators-hx5mg\" (UID: \"2833ecfa-101a-4366-bf29-78982d939334\") " pod="openshift-marketplace/community-operators-hx5mg" Dec 02 15:17:11 crc kubenswrapper[4814]: I1202 15:17:11.958882 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2833ecfa-101a-4366-bf29-78982d939334-catalog-content\") pod \"community-operators-hx5mg\" (UID: \"2833ecfa-101a-4366-bf29-78982d939334\") " pod="openshift-marketplace/community-operators-hx5mg" Dec 02 15:17:12 crc kubenswrapper[4814]: I1202 15:17:12.059999 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2833ecfa-101a-4366-bf29-78982d939334-utilities\") pod \"community-operators-hx5mg\" (UID: \"2833ecfa-101a-4366-bf29-78982d939334\") " pod="openshift-marketplace/community-operators-hx5mg" Dec 02 15:17:12 crc kubenswrapper[4814]: I1202 15:17:12.060061 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2833ecfa-101a-4366-bf29-78982d939334-catalog-content\") pod \"community-operators-hx5mg\" (UID: \"2833ecfa-101a-4366-bf29-78982d939334\") " pod="openshift-marketplace/community-operators-hx5mg" Dec 02 15:17:12 crc kubenswrapper[4814]: I1202 15:17:12.060104 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pctn7\" (UniqueName: \"kubernetes.io/projected/2833ecfa-101a-4366-bf29-78982d939334-kube-api-access-pctn7\") pod \"community-operators-hx5mg\" (UID: \"2833ecfa-101a-4366-bf29-78982d939334\") " pod="openshift-marketplace/community-operators-hx5mg" Dec 02 15:17:12 crc kubenswrapper[4814]: I1202 15:17:12.060600 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2833ecfa-101a-4366-bf29-78982d939334-utilities\") pod \"community-operators-hx5mg\" (UID: \"2833ecfa-101a-4366-bf29-78982d939334\") " pod="openshift-marketplace/community-operators-hx5mg" Dec 02 15:17:12 crc kubenswrapper[4814]: I1202 15:17:12.060635 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2833ecfa-101a-4366-bf29-78982d939334-catalog-content\") pod \"community-operators-hx5mg\" (UID: \"2833ecfa-101a-4366-bf29-78982d939334\") " pod="openshift-marketplace/community-operators-hx5mg" Dec 02 15:17:12 crc kubenswrapper[4814]: I1202 15:17:12.092027 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pctn7\" (UniqueName: \"kubernetes.io/projected/2833ecfa-101a-4366-bf29-78982d939334-kube-api-access-pctn7\") pod \"community-operators-hx5mg\" (UID: \"2833ecfa-101a-4366-bf29-78982d939334\") " pod="openshift-marketplace/community-operators-hx5mg" Dec 02 15:17:12 crc kubenswrapper[4814]: I1202 15:17:12.244243 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hx5mg" Dec 02 15:17:12 crc kubenswrapper[4814]: I1202 15:17:12.852682 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hx5mg"] Dec 02 15:17:12 crc kubenswrapper[4814]: W1202 15:17:12.860525 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2833ecfa_101a_4366_bf29_78982d939334.slice/crio-c41972e21229282221fa72d1fd6b9aab845280ce771a46202ce97be1e737045c WatchSource:0}: Error finding container c41972e21229282221fa72d1fd6b9aab845280ce771a46202ce97be1e737045c: Status 404 returned error can't find the container with id c41972e21229282221fa72d1fd6b9aab845280ce771a46202ce97be1e737045c Dec 02 15:17:13 crc kubenswrapper[4814]: I1202 15:17:13.617599 4814 generic.go:334] "Generic (PLEG): container finished" podID="2833ecfa-101a-4366-bf29-78982d939334" containerID="90c6ee14da6a0fcccd426ece8f23dc94ea53f3e361d5874667c5e69dd895f240" exitCode=0 Dec 02 15:17:13 crc kubenswrapper[4814]: I1202 15:17:13.617914 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hx5mg" event={"ID":"2833ecfa-101a-4366-bf29-78982d939334","Type":"ContainerDied","Data":"90c6ee14da6a0fcccd426ece8f23dc94ea53f3e361d5874667c5e69dd895f240"} Dec 02 15:17:13 crc kubenswrapper[4814]: I1202 15:17:13.619403 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hx5mg" event={"ID":"2833ecfa-101a-4366-bf29-78982d939334","Type":"ContainerStarted","Data":"c41972e21229282221fa72d1fd6b9aab845280ce771a46202ce97be1e737045c"} Dec 02 15:17:14 crc kubenswrapper[4814]: I1202 15:17:14.493853 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-54m7d" Dec 02 15:17:14 crc kubenswrapper[4814]: I1202 15:17:14.493916 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-54m7d" Dec 02 15:17:14 crc kubenswrapper[4814]: I1202 15:17:14.545152 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-54m7d" Dec 02 15:17:14 crc kubenswrapper[4814]: I1202 15:17:14.683053 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-54m7d" Dec 02 15:17:15 crc kubenswrapper[4814]: I1202 15:17:15.639130 4814 generic.go:334] "Generic (PLEG): container finished" podID="2833ecfa-101a-4366-bf29-78982d939334" containerID="6ec236e419c39b957cb396335cf326d00d3e64937f9787568ceeb008fab90e4c" exitCode=0 Dec 02 15:17:15 crc kubenswrapper[4814]: I1202 15:17:15.641356 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hx5mg" event={"ID":"2833ecfa-101a-4366-bf29-78982d939334","Type":"ContainerDied","Data":"6ec236e419c39b957cb396335cf326d00d3e64937f9787568ceeb008fab90e4c"} Dec 02 15:17:16 crc kubenswrapper[4814]: I1202 15:17:16.684503 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hx5mg" event={"ID":"2833ecfa-101a-4366-bf29-78982d939334","Type":"ContainerStarted","Data":"8ec95f8b999ada9fb7ce2c1b63bbf3820d32861f46f093a4e466114e58c83b53"} Dec 02 15:17:16 crc kubenswrapper[4814]: I1202 15:17:16.710568 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hx5mg" podStartSLOduration=3.13239892 podStartE2EDuration="5.710546823s" podCreationTimestamp="2025-12-02 15:17:11 +0000 UTC" firstStartedPulling="2025-12-02 15:17:13.620013855 +0000 UTC m=+2895.744376059" lastFinishedPulling="2025-12-02 15:17:16.198161768 +0000 UTC m=+2898.322523962" observedRunningTime="2025-12-02 15:17:16.704173456 +0000 UTC m=+2898.828535660" watchObservedRunningTime="2025-12-02 15:17:16.710546823 +0000 UTC m=+2898.834909027" Dec 02 
15:17:18 crc kubenswrapper[4814]: I1202 15:17:18.302108 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-54m7d"] Dec 02 15:17:18 crc kubenswrapper[4814]: I1202 15:17:18.302360 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-54m7d" podUID="b3324d1b-eda6-4fd8-9e14-fb52799ea71f" containerName="registry-server" containerID="cri-o://9f67290bde44519690b795743ee497435e4e8890cda02d9e9e715e6657e690a4" gracePeriod=2 Dec 02 15:17:18 crc kubenswrapper[4814]: I1202 15:17:18.728129 4814 generic.go:334] "Generic (PLEG): container finished" podID="b3324d1b-eda6-4fd8-9e14-fb52799ea71f" containerID="9f67290bde44519690b795743ee497435e4e8890cda02d9e9e715e6657e690a4" exitCode=0 Dec 02 15:17:18 crc kubenswrapper[4814]: I1202 15:17:18.728367 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54m7d" event={"ID":"b3324d1b-eda6-4fd8-9e14-fb52799ea71f","Type":"ContainerDied","Data":"9f67290bde44519690b795743ee497435e4e8890cda02d9e9e715e6657e690a4"} Dec 02 15:17:18 crc kubenswrapper[4814]: I1202 15:17:18.820666 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-54m7d" Dec 02 15:17:18 crc kubenswrapper[4814]: I1202 15:17:18.929169 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3324d1b-eda6-4fd8-9e14-fb52799ea71f-catalog-content\") pod \"b3324d1b-eda6-4fd8-9e14-fb52799ea71f\" (UID: \"b3324d1b-eda6-4fd8-9e14-fb52799ea71f\") " Dec 02 15:17:18 crc kubenswrapper[4814]: I1202 15:17:18.929517 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3324d1b-eda6-4fd8-9e14-fb52799ea71f-utilities\") pod \"b3324d1b-eda6-4fd8-9e14-fb52799ea71f\" (UID: \"b3324d1b-eda6-4fd8-9e14-fb52799ea71f\") " Dec 02 15:17:18 crc kubenswrapper[4814]: I1202 15:17:18.929720 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm6r5\" (UniqueName: \"kubernetes.io/projected/b3324d1b-eda6-4fd8-9e14-fb52799ea71f-kube-api-access-vm6r5\") pod \"b3324d1b-eda6-4fd8-9e14-fb52799ea71f\" (UID: \"b3324d1b-eda6-4fd8-9e14-fb52799ea71f\") " Dec 02 15:17:18 crc kubenswrapper[4814]: I1202 15:17:18.930284 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3324d1b-eda6-4fd8-9e14-fb52799ea71f-utilities" (OuterVolumeSpecName: "utilities") pod "b3324d1b-eda6-4fd8-9e14-fb52799ea71f" (UID: "b3324d1b-eda6-4fd8-9e14-fb52799ea71f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:17:18 crc kubenswrapper[4814]: I1202 15:17:18.931111 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3324d1b-eda6-4fd8-9e14-fb52799ea71f-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:18 crc kubenswrapper[4814]: I1202 15:17:18.936816 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3324d1b-eda6-4fd8-9e14-fb52799ea71f-kube-api-access-vm6r5" (OuterVolumeSpecName: "kube-api-access-vm6r5") pod "b3324d1b-eda6-4fd8-9e14-fb52799ea71f" (UID: "b3324d1b-eda6-4fd8-9e14-fb52799ea71f"). InnerVolumeSpecName "kube-api-access-vm6r5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:17:18 crc kubenswrapper[4814]: I1202 15:17:18.948112 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3324d1b-eda6-4fd8-9e14-fb52799ea71f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3324d1b-eda6-4fd8-9e14-fb52799ea71f" (UID: "b3324d1b-eda6-4fd8-9e14-fb52799ea71f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:17:19 crc kubenswrapper[4814]: I1202 15:17:19.032619 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm6r5\" (UniqueName: \"kubernetes.io/projected/b3324d1b-eda6-4fd8-9e14-fb52799ea71f-kube-api-access-vm6r5\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:19 crc kubenswrapper[4814]: I1202 15:17:19.032861 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3324d1b-eda6-4fd8-9e14-fb52799ea71f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:19 crc kubenswrapper[4814]: I1202 15:17:19.144945 4814 scope.go:117] "RemoveContainer" containerID="69da43b5a5649e0e2deb51e5fe58aea87e03f4fc3d3eebd2cb72d9578e21f757" Dec 02 15:17:19 crc kubenswrapper[4814]: I1202 15:17:19.738602 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54m7d" event={"ID":"b3324d1b-eda6-4fd8-9e14-fb52799ea71f","Type":"ContainerDied","Data":"65e648a9abbc811e9b39899e4c608c6e50f3b5f8e4d36f8adb9c832b3596972f"} Dec 02 15:17:19 crc kubenswrapper[4814]: I1202 15:17:19.739721 4814 scope.go:117] "RemoveContainer" containerID="9f67290bde44519690b795743ee497435e4e8890cda02d9e9e715e6657e690a4" Dec 02 15:17:19 crc kubenswrapper[4814]: I1202 15:17:19.738626 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-54m7d" Dec 02 15:17:19 crc kubenswrapper[4814]: I1202 15:17:19.740498 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9b519b57-d380-444e-863b-b42e4d4eb06c","Type":"ContainerStarted","Data":"deb764f82d67624cd2058abe5eb571bebea960c7727e77a1ece5597c35b9e633"} Dec 02 15:17:19 crc kubenswrapper[4814]: I1202 15:17:19.791411 4814 scope.go:117] "RemoveContainer" containerID="39e28a9cd5d10f0a716933b39fdd4f59086f0ebb7ea66d5a3623882a32b3482e" Dec 02 15:17:19 crc kubenswrapper[4814]: I1202 15:17:19.803626 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-54m7d"] Dec 02 15:17:19 crc kubenswrapper[4814]: I1202 15:17:19.809826 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-54m7d"] Dec 02 15:17:19 crc kubenswrapper[4814]: I1202 15:17:19.833388 4814 scope.go:117] "RemoveContainer" containerID="26426fee3a401b3cf0c3ba4dd6be583710799e9601a5345acf1c6b8f41b48390" Dec 02 15:17:20 crc kubenswrapper[4814]: I1202 15:17:20.157316 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3324d1b-eda6-4fd8-9e14-fb52799ea71f" path="/var/lib/kubelet/pods/b3324d1b-eda6-4fd8-9e14-fb52799ea71f/volumes" Dec 02 15:17:22 crc kubenswrapper[4814]: I1202 15:17:22.244462 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hx5mg" Dec 02 15:17:22 crc kubenswrapper[4814]: I1202 15:17:22.244827 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hx5mg" Dec 02 15:17:22 crc kubenswrapper[4814]: I1202 15:17:22.296298 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hx5mg" Dec 02 15:17:22 crc kubenswrapper[4814]: I1202 15:17:22.769825 4814 
generic.go:334] "Generic (PLEG): container finished" podID="9b519b57-d380-444e-863b-b42e4d4eb06c" containerID="deb764f82d67624cd2058abe5eb571bebea960c7727e77a1ece5597c35b9e633" exitCode=1 Dec 02 15:17:22 crc kubenswrapper[4814]: I1202 15:17:22.769908 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9b519b57-d380-444e-863b-b42e4d4eb06c","Type":"ContainerDied","Data":"deb764f82d67624cd2058abe5eb571bebea960c7727e77a1ece5597c35b9e633"} Dec 02 15:17:22 crc kubenswrapper[4814]: I1202 15:17:22.769967 4814 scope.go:117] "RemoveContainer" containerID="69da43b5a5649e0e2deb51e5fe58aea87e03f4fc3d3eebd2cb72d9578e21f757" Dec 02 15:17:22 crc kubenswrapper[4814]: I1202 15:17:22.771299 4814 scope.go:117] "RemoveContainer" containerID="deb764f82d67624cd2058abe5eb571bebea960c7727e77a1ece5597c35b9e633" Dec 02 15:17:22 crc kubenswrapper[4814]: E1202 15:17:22.771536 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:17:22 crc kubenswrapper[4814]: I1202 15:17:22.832203 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hx5mg" Dec 02 15:17:24 crc kubenswrapper[4814]: I1202 15:17:24.906531 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hx5mg"] Dec 02 15:17:24 crc kubenswrapper[4814]: I1202 15:17:24.907949 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hx5mg" podUID="2833ecfa-101a-4366-bf29-78982d939334" containerName="registry-server" 
containerID="cri-o://8ec95f8b999ada9fb7ce2c1b63bbf3820d32861f46f093a4e466114e58c83b53" gracePeriod=2 Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.350613 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hx5mg" Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.440409 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2833ecfa-101a-4366-bf29-78982d939334-utilities\") pod \"2833ecfa-101a-4366-bf29-78982d939334\" (UID: \"2833ecfa-101a-4366-bf29-78982d939334\") " Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.440517 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pctn7\" (UniqueName: \"kubernetes.io/projected/2833ecfa-101a-4366-bf29-78982d939334-kube-api-access-pctn7\") pod \"2833ecfa-101a-4366-bf29-78982d939334\" (UID: \"2833ecfa-101a-4366-bf29-78982d939334\") " Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.440621 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2833ecfa-101a-4366-bf29-78982d939334-catalog-content\") pod \"2833ecfa-101a-4366-bf29-78982d939334\" (UID: \"2833ecfa-101a-4366-bf29-78982d939334\") " Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.441278 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2833ecfa-101a-4366-bf29-78982d939334-utilities" (OuterVolumeSpecName: "utilities") pod "2833ecfa-101a-4366-bf29-78982d939334" (UID: "2833ecfa-101a-4366-bf29-78982d939334"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.447424 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2833ecfa-101a-4366-bf29-78982d939334-kube-api-access-pctn7" (OuterVolumeSpecName: "kube-api-access-pctn7") pod "2833ecfa-101a-4366-bf29-78982d939334" (UID: "2833ecfa-101a-4366-bf29-78982d939334"). InnerVolumeSpecName "kube-api-access-pctn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.510485 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2833ecfa-101a-4366-bf29-78982d939334-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2833ecfa-101a-4366-bf29-78982d939334" (UID: "2833ecfa-101a-4366-bf29-78982d939334"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.542134 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pctn7\" (UniqueName: \"kubernetes.io/projected/2833ecfa-101a-4366-bf29-78982d939334-kube-api-access-pctn7\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.542169 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2833ecfa-101a-4366-bf29-78982d939334-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.542179 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2833ecfa-101a-4366-bf29-78982d939334-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.799065 4814 generic.go:334] "Generic (PLEG): container finished" podID="2833ecfa-101a-4366-bf29-78982d939334" 
containerID="8ec95f8b999ada9fb7ce2c1b63bbf3820d32861f46f093a4e466114e58c83b53" exitCode=0 Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.799125 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hx5mg" event={"ID":"2833ecfa-101a-4366-bf29-78982d939334","Type":"ContainerDied","Data":"8ec95f8b999ada9fb7ce2c1b63bbf3820d32861f46f093a4e466114e58c83b53"} Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.799248 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hx5mg" Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.799289 4814 scope.go:117] "RemoveContainer" containerID="8ec95f8b999ada9fb7ce2c1b63bbf3820d32861f46f093a4e466114e58c83b53" Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.799271 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hx5mg" event={"ID":"2833ecfa-101a-4366-bf29-78982d939334","Type":"ContainerDied","Data":"c41972e21229282221fa72d1fd6b9aab845280ce771a46202ce97be1e737045c"} Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.819819 4814 scope.go:117] "RemoveContainer" containerID="6ec236e419c39b957cb396335cf326d00d3e64937f9787568ceeb008fab90e4c" Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.857189 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hx5mg"] Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.867705 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hx5mg"] Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.879453 4814 scope.go:117] "RemoveContainer" containerID="90c6ee14da6a0fcccd426ece8f23dc94ea53f3e361d5874667c5e69dd895f240" Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.899218 4814 scope.go:117] "RemoveContainer" containerID="8ec95f8b999ada9fb7ce2c1b63bbf3820d32861f46f093a4e466114e58c83b53" Dec 02 
15:17:25 crc kubenswrapper[4814]: E1202 15:17:25.899662 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ec95f8b999ada9fb7ce2c1b63bbf3820d32861f46f093a4e466114e58c83b53\": container with ID starting with 8ec95f8b999ada9fb7ce2c1b63bbf3820d32861f46f093a4e466114e58c83b53 not found: ID does not exist" containerID="8ec95f8b999ada9fb7ce2c1b63bbf3820d32861f46f093a4e466114e58c83b53" Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.899701 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ec95f8b999ada9fb7ce2c1b63bbf3820d32861f46f093a4e466114e58c83b53"} err="failed to get container status \"8ec95f8b999ada9fb7ce2c1b63bbf3820d32861f46f093a4e466114e58c83b53\": rpc error: code = NotFound desc = could not find container \"8ec95f8b999ada9fb7ce2c1b63bbf3820d32861f46f093a4e466114e58c83b53\": container with ID starting with 8ec95f8b999ada9fb7ce2c1b63bbf3820d32861f46f093a4e466114e58c83b53 not found: ID does not exist" Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.899727 4814 scope.go:117] "RemoveContainer" containerID="6ec236e419c39b957cb396335cf326d00d3e64937f9787568ceeb008fab90e4c" Dec 02 15:17:25 crc kubenswrapper[4814]: E1202 15:17:25.899972 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ec236e419c39b957cb396335cf326d00d3e64937f9787568ceeb008fab90e4c\": container with ID starting with 6ec236e419c39b957cb396335cf326d00d3e64937f9787568ceeb008fab90e4c not found: ID does not exist" containerID="6ec236e419c39b957cb396335cf326d00d3e64937f9787568ceeb008fab90e4c" Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.899998 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ec236e419c39b957cb396335cf326d00d3e64937f9787568ceeb008fab90e4c"} err="failed to get container status 
\"6ec236e419c39b957cb396335cf326d00d3e64937f9787568ceeb008fab90e4c\": rpc error: code = NotFound desc = could not find container \"6ec236e419c39b957cb396335cf326d00d3e64937f9787568ceeb008fab90e4c\": container with ID starting with 6ec236e419c39b957cb396335cf326d00d3e64937f9787568ceeb008fab90e4c not found: ID does not exist" Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.900011 4814 scope.go:117] "RemoveContainer" containerID="90c6ee14da6a0fcccd426ece8f23dc94ea53f3e361d5874667c5e69dd895f240" Dec 02 15:17:25 crc kubenswrapper[4814]: E1202 15:17:25.900229 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90c6ee14da6a0fcccd426ece8f23dc94ea53f3e361d5874667c5e69dd895f240\": container with ID starting with 90c6ee14da6a0fcccd426ece8f23dc94ea53f3e361d5874667c5e69dd895f240 not found: ID does not exist" containerID="90c6ee14da6a0fcccd426ece8f23dc94ea53f3e361d5874667c5e69dd895f240" Dec 02 15:17:25 crc kubenswrapper[4814]: I1202 15:17:25.900262 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90c6ee14da6a0fcccd426ece8f23dc94ea53f3e361d5874667c5e69dd895f240"} err="failed to get container status \"90c6ee14da6a0fcccd426ece8f23dc94ea53f3e361d5874667c5e69dd895f240\": rpc error: code = NotFound desc = could not find container \"90c6ee14da6a0fcccd426ece8f23dc94ea53f3e361d5874667c5e69dd895f240\": container with ID starting with 90c6ee14da6a0fcccd426ece8f23dc94ea53f3e361d5874667c5e69dd895f240 not found: ID does not exist" Dec 02 15:17:26 crc kubenswrapper[4814]: I1202 15:17:26.158030 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2833ecfa-101a-4366-bf29-78982d939334" path="/var/lib/kubelet/pods/2833ecfa-101a-4366-bf29-78982d939334/volumes" Dec 02 15:17:28 crc kubenswrapper[4814]: I1202 15:17:28.557310 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:17:28 crc kubenswrapper[4814]: I1202 15:17:28.557742 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:17:28 crc kubenswrapper[4814]: I1202 15:17:28.557769 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:17:28 crc kubenswrapper[4814]: I1202 15:17:28.557789 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:17:28 crc kubenswrapper[4814]: I1202 15:17:28.558720 4814 scope.go:117] "RemoveContainer" containerID="deb764f82d67624cd2058abe5eb571bebea960c7727e77a1ece5597c35b9e633" Dec 02 15:17:28 crc kubenswrapper[4814]: E1202 15:17:28.559121 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:17:43 crc kubenswrapper[4814]: I1202 15:17:43.145282 4814 scope.go:117] "RemoveContainer" containerID="deb764f82d67624cd2058abe5eb571bebea960c7727e77a1ece5597c35b9e633" Dec 02 15:17:43 crc kubenswrapper[4814]: I1202 15:17:43.981808 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9b519b57-d380-444e-863b-b42e4d4eb06c","Type":"ContainerStarted","Data":"763be05b01a43c97b9a8602c6ac88682860c412f7db5acfab221b9c6f80d0c37"} Dec 02 15:17:46 crc kubenswrapper[4814]: I1202 15:17:46.004917 4814 generic.go:334] "Generic (PLEG): container finished" podID="9b519b57-d380-444e-863b-b42e4d4eb06c" 
containerID="763be05b01a43c97b9a8602c6ac88682860c412f7db5acfab221b9c6f80d0c37" exitCode=1 Dec 02 15:17:46 crc kubenswrapper[4814]: I1202 15:17:46.004973 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9b519b57-d380-444e-863b-b42e4d4eb06c","Type":"ContainerDied","Data":"763be05b01a43c97b9a8602c6ac88682860c412f7db5acfab221b9c6f80d0c37"} Dec 02 15:17:46 crc kubenswrapper[4814]: I1202 15:17:46.005196 4814 scope.go:117] "RemoveContainer" containerID="deb764f82d67624cd2058abe5eb571bebea960c7727e77a1ece5597c35b9e633" Dec 02 15:17:46 crc kubenswrapper[4814]: I1202 15:17:46.005844 4814 scope.go:117] "RemoveContainer" containerID="763be05b01a43c97b9a8602c6ac88682860c412f7db5acfab221b9c6f80d0c37" Dec 02 15:17:46 crc kubenswrapper[4814]: E1202 15:17:46.006200 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:17:48 crc kubenswrapper[4814]: I1202 15:17:48.556696 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:17:48 crc kubenswrapper[4814]: I1202 15:17:48.557030 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:17:48 crc kubenswrapper[4814]: I1202 15:17:48.557773 4814 scope.go:117] "RemoveContainer" containerID="763be05b01a43c97b9a8602c6ac88682860c412f7db5acfab221b9c6f80d0c37" Dec 02 15:17:48 crc kubenswrapper[4814]: E1202 15:17:48.558112 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:17:58 crc kubenswrapper[4814]: I1202 15:17:58.557519 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:17:58 crc kubenswrapper[4814]: I1202 15:17:58.558039 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:17:58 crc kubenswrapper[4814]: I1202 15:17:58.558642 4814 scope.go:117] "RemoveContainer" containerID="763be05b01a43c97b9a8602c6ac88682860c412f7db5acfab221b9c6f80d0c37" Dec 02 15:17:58 crc kubenswrapper[4814]: E1202 15:17:58.558847 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:18:01 crc kubenswrapper[4814]: I1202 15:18:01.313291 4814 scope.go:117] "RemoveContainer" containerID="a4a0e2f25265a1ed395c39a815ce8c76f93c8fc62d93b44a209de754523c480a" Dec 02 15:18:01 crc kubenswrapper[4814]: I1202 15:18:01.343136 4814 scope.go:117] "RemoveContainer" containerID="5f06092addd2ffaa8a2e24e86ec80ccb1eb293eb7b02ae028dfc5cd7ad54655d" Dec 02 15:18:01 crc kubenswrapper[4814]: I1202 15:18:01.371280 4814 scope.go:117] "RemoveContainer" containerID="4e65909e45e53896a4950996eb412d6892e5690bc1364bb2a1bfc793a84317ed" Dec 02 15:18:01 crc kubenswrapper[4814]: I1202 15:18:01.415053 4814 scope.go:117] 
"RemoveContainer" containerID="11e955693a5312f39523d5e270f973331738ac38bff4bd99a38b243c38c7f62e" Dec 02 15:18:10 crc kubenswrapper[4814]: I1202 15:18:10.145310 4814 scope.go:117] "RemoveContainer" containerID="763be05b01a43c97b9a8602c6ac88682860c412f7db5acfab221b9c6f80d0c37" Dec 02 15:18:10 crc kubenswrapper[4814]: E1202 15:18:10.146004 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:18:20 crc kubenswrapper[4814]: I1202 15:18:20.651153 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:18:20 crc kubenswrapper[4814]: I1202 15:18:20.651688 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:18:24 crc kubenswrapper[4814]: I1202 15:18:24.144954 4814 scope.go:117] "RemoveContainer" containerID="763be05b01a43c97b9a8602c6ac88682860c412f7db5acfab221b9c6f80d0c37" Dec 02 15:18:24 crc kubenswrapper[4814]: E1202 15:18:24.145664 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine 
pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:18:35 crc kubenswrapper[4814]: I1202 15:18:35.145758 4814 scope.go:117] "RemoveContainer" containerID="763be05b01a43c97b9a8602c6ac88682860c412f7db5acfab221b9c6f80d0c37" Dec 02 15:18:35 crc kubenswrapper[4814]: I1202 15:18:35.446286 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9b519b57-d380-444e-863b-b42e4d4eb06c","Type":"ContainerStarted","Data":"7436b3b1fa5a1104aa645bdbfdb96b29a02bb7eb65f94a9cab72e7af7f3198a2"} Dec 02 15:18:38 crc kubenswrapper[4814]: I1202 15:18:38.470586 4814 generic.go:334] "Generic (PLEG): container finished" podID="9b519b57-d380-444e-863b-b42e4d4eb06c" containerID="7436b3b1fa5a1104aa645bdbfdb96b29a02bb7eb65f94a9cab72e7af7f3198a2" exitCode=1 Dec 02 15:18:38 crc kubenswrapper[4814]: I1202 15:18:38.470680 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9b519b57-d380-444e-863b-b42e4d4eb06c","Type":"ContainerDied","Data":"7436b3b1fa5a1104aa645bdbfdb96b29a02bb7eb65f94a9cab72e7af7f3198a2"} Dec 02 15:18:38 crc kubenswrapper[4814]: I1202 15:18:38.471070 4814 scope.go:117] "RemoveContainer" containerID="763be05b01a43c97b9a8602c6ac88682860c412f7db5acfab221b9c6f80d0c37" Dec 02 15:18:38 crc kubenswrapper[4814]: I1202 15:18:38.471810 4814 scope.go:117] "RemoveContainer" containerID="7436b3b1fa5a1104aa645bdbfdb96b29a02bb7eb65f94a9cab72e7af7f3198a2" Dec 02 15:18:38 crc kubenswrapper[4814]: E1202 15:18:38.472519 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine 
pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:18:38 crc kubenswrapper[4814]: I1202 15:18:38.557070 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:18:38 crc kubenswrapper[4814]: I1202 15:18:38.557392 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:18:39 crc kubenswrapper[4814]: I1202 15:18:39.480727 4814 scope.go:117] "RemoveContainer" containerID="7436b3b1fa5a1104aa645bdbfdb96b29a02bb7eb65f94a9cab72e7af7f3198a2" Dec 02 15:18:39 crc kubenswrapper[4814]: E1202 15:18:39.480931 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:18:50 crc kubenswrapper[4814]: I1202 15:18:50.651265 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:18:50 crc kubenswrapper[4814]: I1202 15:18:50.651823 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Dec 02 15:18:51 crc kubenswrapper[4814]: I1202 15:18:51.144615 4814 scope.go:117] "RemoveContainer" containerID="7436b3b1fa5a1104aa645bdbfdb96b29a02bb7eb65f94a9cab72e7af7f3198a2" Dec 02 15:18:51 crc kubenswrapper[4814]: E1202 15:18:51.144893 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:18:58 crc kubenswrapper[4814]: I1202 15:18:58.557159 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:18:58 crc kubenswrapper[4814]: I1202 15:18:58.557780 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:18:58 crc kubenswrapper[4814]: I1202 15:18:58.559053 4814 scope.go:117] "RemoveContainer" containerID="7436b3b1fa5a1104aa645bdbfdb96b29a02bb7eb65f94a9cab72e7af7f3198a2" Dec 02 15:18:58 crc kubenswrapper[4814]: E1202 15:18:58.559492 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:19:11 crc kubenswrapper[4814]: I1202 15:19:11.145165 4814 scope.go:117] "RemoveContainer" containerID="7436b3b1fa5a1104aa645bdbfdb96b29a02bb7eb65f94a9cab72e7af7f3198a2" Dec 02 15:19:11 crc kubenswrapper[4814]: E1202 15:19:11.145946 4814 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:19:20 crc kubenswrapper[4814]: I1202 15:19:20.651340 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:19:20 crc kubenswrapper[4814]: I1202 15:19:20.652141 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:19:20 crc kubenswrapper[4814]: I1202 15:19:20.652221 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 15:19:20 crc kubenswrapper[4814]: I1202 15:19:20.653467 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29"} pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 15:19:20 crc kubenswrapper[4814]: I1202 15:19:20.653578 4814 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" containerID="cri-o://0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" gracePeriod=600 Dec 02 15:19:20 crc kubenswrapper[4814]: E1202 15:19:20.785389 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:19:20 crc kubenswrapper[4814]: I1202 15:19:20.829848 4814 generic.go:334] "Generic (PLEG): container finished" podID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" exitCode=0 Dec 02 15:19:20 crc kubenswrapper[4814]: I1202 15:19:20.829930 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerDied","Data":"0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29"} Dec 02 15:19:20 crc kubenswrapper[4814]: I1202 15:19:20.830316 4814 scope.go:117] "RemoveContainer" containerID="e117518c89c894ad13d9248d279bf1c0f883ff27f102eca10f935a8ccf6c6034" Dec 02 15:19:20 crc kubenswrapper[4814]: I1202 15:19:20.831049 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:19:20 crc kubenswrapper[4814]: E1202 15:19:20.831596 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:19:23 crc kubenswrapper[4814]: I1202 15:19:23.144626 4814 scope.go:117] "RemoveContainer" containerID="7436b3b1fa5a1104aa645bdbfdb96b29a02bb7eb65f94a9cab72e7af7f3198a2" Dec 02 15:19:23 crc kubenswrapper[4814]: E1202 15:19:23.145627 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:19:33 crc kubenswrapper[4814]: I1202 15:19:33.144770 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:19:33 crc kubenswrapper[4814]: E1202 15:19:33.145717 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:19:34 crc kubenswrapper[4814]: I1202 15:19:34.144660 4814 scope.go:117] "RemoveContainer" containerID="7436b3b1fa5a1104aa645bdbfdb96b29a02bb7eb65f94a9cab72e7af7f3198a2" Dec 02 15:19:34 crc kubenswrapper[4814]: E1202 15:19:34.144961 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed 
container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:19:45 crc kubenswrapper[4814]: I1202 15:19:45.144678 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:19:45 crc kubenswrapper[4814]: E1202 15:19:45.146431 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:19:47 crc kubenswrapper[4814]: I1202 15:19:47.144615 4814 scope.go:117] "RemoveContainer" containerID="7436b3b1fa5a1104aa645bdbfdb96b29a02bb7eb65f94a9cab72e7af7f3198a2" Dec 02 15:19:47 crc kubenswrapper[4814]: E1202 15:19:47.145129 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:19:58 crc kubenswrapper[4814]: I1202 15:19:58.151756 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:19:58 crc kubenswrapper[4814]: E1202 15:19:58.152671 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:19:59 crc kubenswrapper[4814]: I1202 15:19:59.145518 4814 scope.go:117] "RemoveContainer" containerID="7436b3b1fa5a1104aa645bdbfdb96b29a02bb7eb65f94a9cab72e7af7f3198a2" Dec 02 15:20:00 crc kubenswrapper[4814]: I1202 15:20:00.234006 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9b519b57-d380-444e-863b-b42e4d4eb06c","Type":"ContainerStarted","Data":"1467409b9ad2cd5cc03849f28148c7958fa759216d16b5b2874bb7f9d4eeb0cc"} Dec 02 15:20:02 crc kubenswrapper[4814]: I1202 15:20:02.252022 4814 generic.go:334] "Generic (PLEG): container finished" podID="9b519b57-d380-444e-863b-b42e4d4eb06c" containerID="1467409b9ad2cd5cc03849f28148c7958fa759216d16b5b2874bb7f9d4eeb0cc" exitCode=1 Dec 02 15:20:02 crc kubenswrapper[4814]: I1202 15:20:02.252104 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9b519b57-d380-444e-863b-b42e4d4eb06c","Type":"ContainerDied","Data":"1467409b9ad2cd5cc03849f28148c7958fa759216d16b5b2874bb7f9d4eeb0cc"} Dec 02 15:20:02 crc kubenswrapper[4814]: I1202 15:20:02.252384 4814 scope.go:117] "RemoveContainer" containerID="7436b3b1fa5a1104aa645bdbfdb96b29a02bb7eb65f94a9cab72e7af7f3198a2" Dec 02 15:20:02 crc kubenswrapper[4814]: I1202 15:20:02.253071 4814 scope.go:117] "RemoveContainer" containerID="1467409b9ad2cd5cc03849f28148c7958fa759216d16b5b2874bb7f9d4eeb0cc" Dec 02 15:20:02 crc kubenswrapper[4814]: E1202 15:20:02.253318 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine 
pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:20:08 crc kubenswrapper[4814]: I1202 15:20:08.556962 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:20:08 crc kubenswrapper[4814]: I1202 15:20:08.557627 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:20:08 crc kubenswrapper[4814]: I1202 15:20:08.558278 4814 scope.go:117] "RemoveContainer" containerID="1467409b9ad2cd5cc03849f28148c7958fa759216d16b5b2874bb7f9d4eeb0cc" Dec 02 15:20:08 crc kubenswrapper[4814]: E1202 15:20:08.558625 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:20:09 crc kubenswrapper[4814]: I1202 15:20:09.144636 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:20:09 crc kubenswrapper[4814]: E1202 15:20:09.145193 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:20:22 crc kubenswrapper[4814]: I1202 15:20:22.145400 
4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:20:22 crc kubenswrapper[4814]: E1202 15:20:22.146200 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:20:24 crc kubenswrapper[4814]: I1202 15:20:24.145122 4814 scope.go:117] "RemoveContainer" containerID="1467409b9ad2cd5cc03849f28148c7958fa759216d16b5b2874bb7f9d4eeb0cc" Dec 02 15:20:24 crc kubenswrapper[4814]: E1202 15:20:24.145709 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:20:28 crc kubenswrapper[4814]: I1202 15:20:28.557128 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:20:28 crc kubenswrapper[4814]: I1202 15:20:28.557481 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:20:28 crc kubenswrapper[4814]: I1202 15:20:28.558178 4814 scope.go:117] "RemoveContainer" containerID="1467409b9ad2cd5cc03849f28148c7958fa759216d16b5b2874bb7f9d4eeb0cc" Dec 02 15:20:28 crc kubenswrapper[4814]: E1202 15:20:28.558528 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:20:34 crc kubenswrapper[4814]: I1202 15:20:34.144560 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:20:34 crc kubenswrapper[4814]: E1202 15:20:34.145511 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:20:43 crc kubenswrapper[4814]: I1202 15:20:43.147540 4814 scope.go:117] "RemoveContainer" containerID="1467409b9ad2cd5cc03849f28148c7958fa759216d16b5b2874bb7f9d4eeb0cc" Dec 02 15:20:43 crc kubenswrapper[4814]: E1202 15:20:43.148707 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:20:49 crc kubenswrapper[4814]: I1202 15:20:49.144319 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:20:49 crc kubenswrapper[4814]: E1202 15:20:49.145160 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:20:54 crc kubenswrapper[4814]: I1202 15:20:54.144968 4814 scope.go:117] "RemoveContainer" containerID="1467409b9ad2cd5cc03849f28148c7958fa759216d16b5b2874bb7f9d4eeb0cc" Dec 02 15:20:54 crc kubenswrapper[4814]: E1202 15:20:54.145873 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:21:02 crc kubenswrapper[4814]: I1202 15:21:02.144959 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:21:02 crc kubenswrapper[4814]: E1202 15:21:02.146042 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:21:09 crc kubenswrapper[4814]: I1202 15:21:09.145473 4814 scope.go:117] "RemoveContainer" containerID="1467409b9ad2cd5cc03849f28148c7958fa759216d16b5b2874bb7f9d4eeb0cc" Dec 02 15:21:09 crc kubenswrapper[4814]: E1202 15:21:09.146721 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:21:16 crc kubenswrapper[4814]: I1202 15:21:16.146001 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:21:16 crc kubenswrapper[4814]: E1202 15:21:16.146934 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:21:24 crc kubenswrapper[4814]: I1202 15:21:24.144544 4814 scope.go:117] "RemoveContainer" containerID="1467409b9ad2cd5cc03849f28148c7958fa759216d16b5b2874bb7f9d4eeb0cc" Dec 02 15:21:24 crc kubenswrapper[4814]: E1202 15:21:24.145300 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:21:30 crc kubenswrapper[4814]: I1202 15:21:30.145819 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:21:30 crc kubenswrapper[4814]: E1202 15:21:30.147189 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:21:39 crc kubenswrapper[4814]: I1202 15:21:39.144462 4814 scope.go:117] "RemoveContainer" containerID="1467409b9ad2cd5cc03849f28148c7958fa759216d16b5b2874bb7f9d4eeb0cc" Dec 02 15:21:39 crc kubenswrapper[4814]: E1202 15:21:39.145123 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:21:42 crc kubenswrapper[4814]: I1202 15:21:42.144654 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:21:42 crc kubenswrapper[4814]: E1202 15:21:42.145155 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:21:53 crc kubenswrapper[4814]: I1202 15:21:53.144996 4814 scope.go:117] "RemoveContainer" containerID="1467409b9ad2cd5cc03849f28148c7958fa759216d16b5b2874bb7f9d4eeb0cc" Dec 02 15:21:53 crc kubenswrapper[4814]: E1202 15:21:53.146165 4814 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:21:56 crc kubenswrapper[4814]: I1202 15:21:56.146999 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:21:56 crc kubenswrapper[4814]: E1202 15:21:56.147434 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:22:07 crc kubenswrapper[4814]: I1202 15:22:07.144703 4814 scope.go:117] "RemoveContainer" containerID="1467409b9ad2cd5cc03849f28148c7958fa759216d16b5b2874bb7f9d4eeb0cc" Dec 02 15:22:07 crc kubenswrapper[4814]: E1202 15:22:07.145571 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:22:10 crc kubenswrapper[4814]: I1202 15:22:10.145101 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:22:10 crc kubenswrapper[4814]: E1202 15:22:10.145715 4814 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:22:14 crc kubenswrapper[4814]: I1202 15:22:14.877651 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jkh4f/must-gather-wk9zz"] Dec 02 15:22:14 crc kubenswrapper[4814]: E1202 15:22:14.878062 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2833ecfa-101a-4366-bf29-78982d939334" containerName="extract-utilities" Dec 02 15:22:14 crc kubenswrapper[4814]: I1202 15:22:14.878078 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="2833ecfa-101a-4366-bf29-78982d939334" containerName="extract-utilities" Dec 02 15:22:14 crc kubenswrapper[4814]: E1202 15:22:14.878091 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3324d1b-eda6-4fd8-9e14-fb52799ea71f" containerName="extract-content" Dec 02 15:22:14 crc kubenswrapper[4814]: I1202 15:22:14.878098 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3324d1b-eda6-4fd8-9e14-fb52799ea71f" containerName="extract-content" Dec 02 15:22:14 crc kubenswrapper[4814]: E1202 15:22:14.878111 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3324d1b-eda6-4fd8-9e14-fb52799ea71f" containerName="extract-utilities" Dec 02 15:22:14 crc kubenswrapper[4814]: I1202 15:22:14.878118 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3324d1b-eda6-4fd8-9e14-fb52799ea71f" containerName="extract-utilities" Dec 02 15:22:14 crc kubenswrapper[4814]: E1202 15:22:14.878133 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2833ecfa-101a-4366-bf29-78982d939334" containerName="extract-content" Dec 02 15:22:14 crc kubenswrapper[4814]: I1202 
15:22:14.878140 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="2833ecfa-101a-4366-bf29-78982d939334" containerName="extract-content" Dec 02 15:22:14 crc kubenswrapper[4814]: E1202 15:22:14.878160 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2833ecfa-101a-4366-bf29-78982d939334" containerName="registry-server" Dec 02 15:22:14 crc kubenswrapper[4814]: I1202 15:22:14.878167 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="2833ecfa-101a-4366-bf29-78982d939334" containerName="registry-server" Dec 02 15:22:14 crc kubenswrapper[4814]: E1202 15:22:14.878175 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3324d1b-eda6-4fd8-9e14-fb52799ea71f" containerName="registry-server" Dec 02 15:22:14 crc kubenswrapper[4814]: I1202 15:22:14.878183 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3324d1b-eda6-4fd8-9e14-fb52799ea71f" containerName="registry-server" Dec 02 15:22:14 crc kubenswrapper[4814]: I1202 15:22:14.878383 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="2833ecfa-101a-4366-bf29-78982d939334" containerName="registry-server" Dec 02 15:22:14 crc kubenswrapper[4814]: I1202 15:22:14.878425 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3324d1b-eda6-4fd8-9e14-fb52799ea71f" containerName="registry-server" Dec 02 15:22:14 crc kubenswrapper[4814]: I1202 15:22:14.879486 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jkh4f/must-gather-wk9zz" Dec 02 15:22:14 crc kubenswrapper[4814]: I1202 15:22:14.881964 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jkh4f"/"kube-root-ca.crt" Dec 02 15:22:14 crc kubenswrapper[4814]: I1202 15:22:14.882206 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jkh4f"/"openshift-service-ca.crt" Dec 02 15:22:14 crc kubenswrapper[4814]: I1202 15:22:14.984028 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jkh4f/must-gather-wk9zz"] Dec 02 15:22:15 crc kubenswrapper[4814]: I1202 15:22:15.072978 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t2x4\" (UniqueName: \"kubernetes.io/projected/e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0-kube-api-access-5t2x4\") pod \"must-gather-wk9zz\" (UID: \"e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0\") " pod="openshift-must-gather-jkh4f/must-gather-wk9zz" Dec 02 15:22:15 crc kubenswrapper[4814]: I1202 15:22:15.073187 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0-must-gather-output\") pod \"must-gather-wk9zz\" (UID: \"e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0\") " pod="openshift-must-gather-jkh4f/must-gather-wk9zz" Dec 02 15:22:15 crc kubenswrapper[4814]: I1202 15:22:15.175213 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t2x4\" (UniqueName: \"kubernetes.io/projected/e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0-kube-api-access-5t2x4\") pod \"must-gather-wk9zz\" (UID: \"e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0\") " pod="openshift-must-gather-jkh4f/must-gather-wk9zz" Dec 02 15:22:15 crc kubenswrapper[4814]: I1202 15:22:15.175393 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0-must-gather-output\") pod \"must-gather-wk9zz\" (UID: \"e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0\") " pod="openshift-must-gather-jkh4f/must-gather-wk9zz" Dec 02 15:22:15 crc kubenswrapper[4814]: I1202 15:22:15.177793 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0-must-gather-output\") pod \"must-gather-wk9zz\" (UID: \"e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0\") " pod="openshift-must-gather-jkh4f/must-gather-wk9zz" Dec 02 15:22:15 crc kubenswrapper[4814]: I1202 15:22:15.197914 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t2x4\" (UniqueName: \"kubernetes.io/projected/e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0-kube-api-access-5t2x4\") pod \"must-gather-wk9zz\" (UID: \"e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0\") " pod="openshift-must-gather-jkh4f/must-gather-wk9zz" Dec 02 15:22:15 crc kubenswrapper[4814]: I1202 15:22:15.200662 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jkh4f/must-gather-wk9zz" Dec 02 15:22:15 crc kubenswrapper[4814]: W1202 15:22:15.701328 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode048f1f5_1d6e_47fd_8a76_ba3b7743e9b0.slice/crio-bd242344fc28e24dcc1c00c0cfede5078571fd63cb6a8daa29a82a5f287082e5 WatchSource:0}: Error finding container bd242344fc28e24dcc1c00c0cfede5078571fd63cb6a8daa29a82a5f287082e5: Status 404 returned error can't find the container with id bd242344fc28e24dcc1c00c0cfede5078571fd63cb6a8daa29a82a5f287082e5 Dec 02 15:22:15 crc kubenswrapper[4814]: I1202 15:22:15.702166 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jkh4f/must-gather-wk9zz"] Dec 02 15:22:15 crc kubenswrapper[4814]: I1202 15:22:15.703507 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 15:22:16 crc kubenswrapper[4814]: I1202 15:22:16.510635 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jkh4f/must-gather-wk9zz" event={"ID":"e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0","Type":"ContainerStarted","Data":"bd242344fc28e24dcc1c00c0cfede5078571fd63cb6a8daa29a82a5f287082e5"} Dec 02 15:22:20 crc kubenswrapper[4814]: I1202 15:22:20.151560 4814 scope.go:117] "RemoveContainer" containerID="1467409b9ad2cd5cc03849f28148c7958fa759216d16b5b2874bb7f9d4eeb0cc" Dec 02 15:22:20 crc kubenswrapper[4814]: E1202 15:22:20.152040 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:22:20 crc kubenswrapper[4814]: I1202 15:22:20.561888 
4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jkh4f/must-gather-wk9zz" event={"ID":"e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0","Type":"ContainerStarted","Data":"4761e84f91fe77be0a5e1d405cabbfcab55ff2497f6e47f69804996aaee12362"} Dec 02 15:22:21 crc kubenswrapper[4814]: I1202 15:22:21.145442 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:22:21 crc kubenswrapper[4814]: E1202 15:22:21.145676 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:22:21 crc kubenswrapper[4814]: I1202 15:22:21.572282 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jkh4f/must-gather-wk9zz" event={"ID":"e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0","Type":"ContainerStarted","Data":"775dd04927be1adbadef93b76a25846e6a043fefbe79685f0c071b33654e5391"} Dec 02 15:22:21 crc kubenswrapper[4814]: I1202 15:22:21.593998 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jkh4f/must-gather-wk9zz" podStartSLOduration=3.306322762 podStartE2EDuration="7.593982738s" podCreationTimestamp="2025-12-02 15:22:14 +0000 UTC" firstStartedPulling="2025-12-02 15:22:15.703479334 +0000 UTC m=+3197.827841538" lastFinishedPulling="2025-12-02 15:22:19.99113931 +0000 UTC m=+3202.115501514" observedRunningTime="2025-12-02 15:22:21.588585729 +0000 UTC m=+3203.712947933" watchObservedRunningTime="2025-12-02 15:22:21.593982738 +0000 UTC m=+3203.718344942" Dec 02 15:22:32 crc kubenswrapper[4814]: I1202 15:22:32.144300 4814 scope.go:117] "RemoveContainer" 
containerID="1467409b9ad2cd5cc03849f28148c7958fa759216d16b5b2874bb7f9d4eeb0cc" Dec 02 15:22:32 crc kubenswrapper[4814]: E1202 15:22:32.145050 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:22:35 crc kubenswrapper[4814]: I1202 15:22:35.145486 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:22:35 crc kubenswrapper[4814]: E1202 15:22:35.145993 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:22:47 crc kubenswrapper[4814]: I1202 15:22:47.144647 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:22:47 crc kubenswrapper[4814]: I1202 15:22:47.145407 4814 scope.go:117] "RemoveContainer" containerID="1467409b9ad2cd5cc03849f28148c7958fa759216d16b5b2874bb7f9d4eeb0cc" Dec 02 15:22:47 crc kubenswrapper[4814]: E1202 15:22:47.145591 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:22:47 crc kubenswrapper[4814]: I1202 15:22:47.784207 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9b519b57-d380-444e-863b-b42e4d4eb06c","Type":"ContainerStarted","Data":"37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9"} Dec 02 15:22:48 crc kubenswrapper[4814]: I1202 15:22:48.557779 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:22:48 crc kubenswrapper[4814]: I1202 15:22:48.582136 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:22:48 crc kubenswrapper[4814]: I1202 15:22:48.792378 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:22:48 crc kubenswrapper[4814]: I1202 15:22:48.819220 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:22:50 crc kubenswrapper[4814]: I1202 15:22:50.809719 4814 generic.go:334] "Generic (PLEG): container finished" podID="9b519b57-d380-444e-863b-b42e4d4eb06c" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9" exitCode=1 Dec 02 15:22:50 crc kubenswrapper[4814]: I1202 15:22:50.809806 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9b519b57-d380-444e-863b-b42e4d4eb06c","Type":"ContainerDied","Data":"37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9"} Dec 02 15:22:50 crc kubenswrapper[4814]: I1202 15:22:50.810072 4814 scope.go:117] "RemoveContainer" containerID="1467409b9ad2cd5cc03849f28148c7958fa759216d16b5b2874bb7f9d4eeb0cc" Dec 02 
15:22:50 crc kubenswrapper[4814]: I1202 15:22:50.810318 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9" Dec 02 15:22:50 crc kubenswrapper[4814]: E1202 15:22:50.810545 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:22:51 crc kubenswrapper[4814]: I1202 15:22:51.819595 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9" Dec 02 15:22:51 crc kubenswrapper[4814]: E1202 15:22:51.820005 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:22:58 crc kubenswrapper[4814]: I1202 15:22:58.557539 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:22:58 crc kubenswrapper[4814]: I1202 15:22:58.558109 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:22:58 crc kubenswrapper[4814]: I1202 15:22:58.558127 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:22:58 crc kubenswrapper[4814]: I1202 15:22:58.558910 4814 scope.go:117] 
"RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9" Dec 02 15:22:58 crc kubenswrapper[4814]: E1202 15:22:58.559189 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:23:01 crc kubenswrapper[4814]: I1202 15:23:01.144633 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:23:01 crc kubenswrapper[4814]: E1202 15:23:01.145135 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:23:01 crc kubenswrapper[4814]: I1202 15:23:01.583284 4814 scope.go:117] "RemoveContainer" containerID="ee054742134987329b47688b18bcb8eec0ccbb6ff7ad683bc16e1571388ef0c2" Dec 02 15:23:14 crc kubenswrapper[4814]: I1202 15:23:14.145180 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:23:14 crc kubenswrapper[4814]: I1202 15:23:14.145844 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9" Dec 02 15:23:14 crc kubenswrapper[4814]: E1202 15:23:14.145979 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:23:14 crc kubenswrapper[4814]: E1202 15:23:14.146083 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:23:20 crc kubenswrapper[4814]: I1202 15:23:20.347712 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tf8jl"] Dec 02 15:23:20 crc kubenswrapper[4814]: I1202 15:23:20.350314 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tf8jl" Dec 02 15:23:20 crc kubenswrapper[4814]: I1202 15:23:20.353086 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a-catalog-content\") pod \"redhat-operators-tf8jl\" (UID: \"53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a\") " pod="openshift-marketplace/redhat-operators-tf8jl" Dec 02 15:23:20 crc kubenswrapper[4814]: I1202 15:23:20.353189 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9x74\" (UniqueName: \"kubernetes.io/projected/53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a-kube-api-access-w9x74\") pod \"redhat-operators-tf8jl\" (UID: \"53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a\") " pod="openshift-marketplace/redhat-operators-tf8jl" Dec 02 15:23:20 crc kubenswrapper[4814]: I1202 15:23:20.353251 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a-utilities\") pod \"redhat-operators-tf8jl\" (UID: \"53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a\") " pod="openshift-marketplace/redhat-operators-tf8jl" Dec 02 15:23:20 crc kubenswrapper[4814]: I1202 15:23:20.366003 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tf8jl"] Dec 02 15:23:20 crc kubenswrapper[4814]: I1202 15:23:20.454420 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a-catalog-content\") pod \"redhat-operators-tf8jl\" (UID: \"53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a\") " pod="openshift-marketplace/redhat-operators-tf8jl" Dec 02 15:23:20 crc kubenswrapper[4814]: I1202 15:23:20.454474 4814 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-w9x74\" (UniqueName: \"kubernetes.io/projected/53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a-kube-api-access-w9x74\") pod \"redhat-operators-tf8jl\" (UID: \"53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a\") " pod="openshift-marketplace/redhat-operators-tf8jl" Dec 02 15:23:20 crc kubenswrapper[4814]: I1202 15:23:20.454496 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a-utilities\") pod \"redhat-operators-tf8jl\" (UID: \"53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a\") " pod="openshift-marketplace/redhat-operators-tf8jl" Dec 02 15:23:20 crc kubenswrapper[4814]: I1202 15:23:20.455036 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a-catalog-content\") pod \"redhat-operators-tf8jl\" (UID: \"53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a\") " pod="openshift-marketplace/redhat-operators-tf8jl" Dec 02 15:23:20 crc kubenswrapper[4814]: I1202 15:23:20.455069 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a-utilities\") pod \"redhat-operators-tf8jl\" (UID: \"53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a\") " pod="openshift-marketplace/redhat-operators-tf8jl" Dec 02 15:23:20 crc kubenswrapper[4814]: I1202 15:23:20.478008 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9x74\" (UniqueName: \"kubernetes.io/projected/53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a-kube-api-access-w9x74\") pod \"redhat-operators-tf8jl\" (UID: \"53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a\") " pod="openshift-marketplace/redhat-operators-tf8jl" Dec 02 15:23:20 crc kubenswrapper[4814]: I1202 15:23:20.674473 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tf8jl" Dec 02 15:23:21 crc kubenswrapper[4814]: I1202 15:23:21.123833 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tf8jl"] Dec 02 15:23:21 crc kubenswrapper[4814]: W1202 15:23:21.136614 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53a60a44_e1ce_42c4_9ad6_fcf7ec5a4e3a.slice/crio-de766fd0439d13192792000c442c4c6b5bbe0485b071d7912b478334fc2a2a7e WatchSource:0}: Error finding container de766fd0439d13192792000c442c4c6b5bbe0485b071d7912b478334fc2a2a7e: Status 404 returned error can't find the container with id de766fd0439d13192792000c442c4c6b5bbe0485b071d7912b478334fc2a2a7e Dec 02 15:23:22 crc kubenswrapper[4814]: I1202 15:23:22.065766 4814 generic.go:334] "Generic (PLEG): container finished" podID="53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a" containerID="f30523721cfe1336a2714054c80539fc2dd59ebd9286056608c1739240c8c0de" exitCode=0 Dec 02 15:23:22 crc kubenswrapper[4814]: I1202 15:23:22.065971 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tf8jl" event={"ID":"53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a","Type":"ContainerDied","Data":"f30523721cfe1336a2714054c80539fc2dd59ebd9286056608c1739240c8c0de"} Dec 02 15:23:22 crc kubenswrapper[4814]: I1202 15:23:22.066390 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tf8jl" event={"ID":"53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a","Type":"ContainerStarted","Data":"de766fd0439d13192792000c442c4c6b5bbe0485b071d7912b478334fc2a2a7e"} Dec 02 15:23:25 crc kubenswrapper[4814]: I1202 15:23:25.144955 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9" Dec 02 15:23:25 crc kubenswrapper[4814]: I1202 15:23:25.145496 4814 scope.go:117] "RemoveContainer" 
containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:23:25 crc kubenswrapper[4814]: E1202 15:23:25.145643 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:23:25 crc kubenswrapper[4814]: E1202 15:23:25.145734 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:23:26 crc kubenswrapper[4814]: I1202 15:23:26.616718 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c_78b31a07-98de-43ef-994f-f7781fac719f/util/0.log" Dec 02 15:23:26 crc kubenswrapper[4814]: I1202 15:23:26.798515 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c_78b31a07-98de-43ef-994f-f7781fac719f/pull/0.log" Dec 02 15:23:26 crc kubenswrapper[4814]: I1202 15:23:26.805325 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c_78b31a07-98de-43ef-994f-f7781fac719f/pull/0.log" Dec 02 15:23:26 crc kubenswrapper[4814]: I1202 15:23:26.805714 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c_78b31a07-98de-43ef-994f-f7781fac719f/util/0.log" Dec 02 15:23:27 crc kubenswrapper[4814]: I1202 15:23:27.038452 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c_78b31a07-98de-43ef-994f-f7781fac719f/pull/0.log" Dec 02 15:23:27 crc kubenswrapper[4814]: I1202 15:23:27.058480 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c_78b31a07-98de-43ef-994f-f7781fac719f/util/0.log" Dec 02 15:23:27 crc kubenswrapper[4814]: I1202 15:23:27.116002 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97af3a9d6e7846261860a25410a6b8962d43e6e2f0767ddd8da1f4c341sbt7c_78b31a07-98de-43ef-994f-f7781fac719f/extract/0.log" Dec 02 15:23:27 crc kubenswrapper[4814]: I1202 15:23:27.233140 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-hkkjn_d9fc1bb3-efc9-4832-a2d6-b60b3565fb54/kube-rbac-proxy/0.log" Dec 02 15:23:27 crc kubenswrapper[4814]: I1202 15:23:27.302434 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-hkkjn_d9fc1bb3-efc9-4832-a2d6-b60b3565fb54/manager/0.log" Dec 02 15:23:27 crc kubenswrapper[4814]: I1202 15:23:27.379626 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-d268c_0848a668-6f55-429e-951d-ef285f20a1b6/kube-rbac-proxy/0.log" Dec 02 15:23:27 crc kubenswrapper[4814]: I1202 15:23:27.506625 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-d268c_0848a668-6f55-429e-951d-ef285f20a1b6/manager/0.log" Dec 02 15:23:27 crc kubenswrapper[4814]: 
I1202 15:23:27.696491 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-57bd6_a2a6620c-b75b-4223-ac5c-721a57d69a06/kube-rbac-proxy/0.log" Dec 02 15:23:27 crc kubenswrapper[4814]: I1202 15:23:27.839667 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-57bd6_a2a6620c-b75b-4223-ac5c-721a57d69a06/manager/0.log" Dec 02 15:23:27 crc kubenswrapper[4814]: I1202 15:23:27.949589 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl_9ce42437-6d96-4856-ac49-711a0959ba93/util/0.log" Dec 02 15:23:28 crc kubenswrapper[4814]: I1202 15:23:28.159546 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl_9ce42437-6d96-4856-ac49-711a0959ba93/pull/0.log" Dec 02 15:23:28 crc kubenswrapper[4814]: I1202 15:23:28.167783 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl_9ce42437-6d96-4856-ac49-711a0959ba93/util/0.log" Dec 02 15:23:28 crc kubenswrapper[4814]: I1202 15:23:28.195534 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl_9ce42437-6d96-4856-ac49-711a0959ba93/pull/0.log" Dec 02 15:23:28 crc kubenswrapper[4814]: I1202 15:23:28.375383 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl_9ce42437-6d96-4856-ac49-711a0959ba93/pull/0.log" Dec 02 15:23:28 crc kubenswrapper[4814]: I1202 15:23:28.394735 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl_9ce42437-6d96-4856-ac49-711a0959ba93/extract/0.log" Dec 02 15:23:28 crc kubenswrapper[4814]: I1202 15:23:28.418048 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f0eeb8dbdc423ea8bc2d2093518452e7fe05e18b34c599e9d1d70a52f0sw2vl_9ce42437-6d96-4856-ac49-711a0959ba93/util/0.log" Dec 02 15:23:28 crc kubenswrapper[4814]: I1202 15:23:28.581106 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-7ttvw_2bd51779-c0ec-438d-ad69-a737752b9b90/kube-rbac-proxy/0.log" Dec 02 15:23:28 crc kubenswrapper[4814]: I1202 15:23:28.664181 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-7ttvw_2bd51779-c0ec-438d-ad69-a737752b9b90/manager/0.log" Dec 02 15:23:28 crc kubenswrapper[4814]: I1202 15:23:28.738201 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-knqvz_a72143d5-40c2-4fbe-87de-c3be61143c9e/kube-rbac-proxy/0.log" Dec 02 15:23:28 crc kubenswrapper[4814]: I1202 15:23:28.838891 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-knqvz_a72143d5-40c2-4fbe-87de-c3be61143c9e/manager/0.log" Dec 02 15:23:28 crc kubenswrapper[4814]: I1202 15:23:28.905151 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-z92q9_37189c89-1420-498e-bcb9-d42e0cd65ba9/kube-rbac-proxy/0.log" Dec 02 15:23:28 crc kubenswrapper[4814]: I1202 15:23:28.997082 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-z92q9_37189c89-1420-498e-bcb9-d42e0cd65ba9/manager/0.log" Dec 02 15:23:29 crc kubenswrapper[4814]: I1202 
15:23:29.194746 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-h9gcz_2f89b1a9-fb43-4ef5-a8f7-9a15b219022c/kube-rbac-proxy/0.log" Dec 02 15:23:29 crc kubenswrapper[4814]: I1202 15:23:29.380370 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-rgnvj_41513450-3327-470f-b978-b5fa13c72991/kube-rbac-proxy/0.log" Dec 02 15:23:29 crc kubenswrapper[4814]: I1202 15:23:29.423604 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-h9gcz_2f89b1a9-fb43-4ef5-a8f7-9a15b219022c/manager/0.log" Dec 02 15:23:29 crc kubenswrapper[4814]: I1202 15:23:29.476651 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-rgnvj_41513450-3327-470f-b978-b5fa13c72991/manager/0.log" Dec 02 15:23:29 crc kubenswrapper[4814]: I1202 15:23:29.595016 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-sclz2_ce263a59-b81e-4394-9dbd-fe091bc7075f/kube-rbac-proxy/0.log" Dec 02 15:23:29 crc kubenswrapper[4814]: I1202 15:23:29.731357 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-sclz2_ce263a59-b81e-4394-9dbd-fe091bc7075f/manager/0.log" Dec 02 15:23:29 crc kubenswrapper[4814]: I1202 15:23:29.740726 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-zbs77_5d626448-a1ab-4d7a-bc09-d0777abad82d/kube-rbac-proxy/0.log" Dec 02 15:23:29 crc kubenswrapper[4814]: I1202 15:23:29.884678 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-zbs77_5d626448-a1ab-4d7a-bc09-d0777abad82d/manager/0.log" Dec 
02 15:23:30 crc kubenswrapper[4814]: I1202 15:23:30.047799 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-q7sbm_b362d2bd-46b7-478b-af11-125a0d2fd5b0/kube-rbac-proxy/0.log" Dec 02 15:23:30 crc kubenswrapper[4814]: I1202 15:23:30.129951 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-q7sbm_b362d2bd-46b7-478b-af11-125a0d2fd5b0/manager/0.log" Dec 02 15:23:30 crc kubenswrapper[4814]: I1202 15:23:30.249638 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-9nxkr_f36f9c57-13d7-48e0-b519-89e87670e082/kube-rbac-proxy/0.log" Dec 02 15:23:30 crc kubenswrapper[4814]: I1202 15:23:30.259735 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-9nxkr_f36f9c57-13d7-48e0-b519-89e87670e082/manager/0.log" Dec 02 15:23:30 crc kubenswrapper[4814]: I1202 15:23:30.475631 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-ngzqv_88f105c6-4510-46e0-8be8-dc360a9b73ce/kube-rbac-proxy/0.log" Dec 02 15:23:30 crc kubenswrapper[4814]: I1202 15:23:30.512935 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-ngzqv_88f105c6-4510-46e0-8be8-dc360a9b73ce/manager/0.log" Dec 02 15:23:30 crc kubenswrapper[4814]: I1202 15:23:30.564277 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-l5bdj_9c15f550-1056-421a-aee0-301d768590c6/manager/0.log" Dec 02 15:23:30 crc kubenswrapper[4814]: I1202 15:23:30.564416 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-l5bdj_9c15f550-1056-421a-aee0-301d768590c6/kube-rbac-proxy/0.log" Dec 02 15:23:30 crc kubenswrapper[4814]: I1202 15:23:30.699672 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld_491246a7-1bdb-4efc-99a8-16ab9502d44d/kube-rbac-proxy/0.log" Dec 02 15:23:30 crc kubenswrapper[4814]: I1202 15:23:30.786266 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4mn7ld_491246a7-1bdb-4efc-99a8-16ab9502d44d/manager/0.log" Dec 02 15:23:31 crc kubenswrapper[4814]: I1202 15:23:31.055200 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-59c6849d55-jttnz_1d9fb058-e400-4c7c-af16-093ca5a5f1bc/manager/0.log" Dec 02 15:23:31 crc kubenswrapper[4814]: I1202 15:23:31.233602 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-vbb99_6ff033a3-1047-4ee3-904b-1e6845244e16/registry-server/0.log" Dec 02 15:23:31 crc kubenswrapper[4814]: I1202 15:23:31.370322 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-w6xr2_94a7cab9-3ba9-49c1-b468-b42424e3c43c/manager/0.log" Dec 02 15:23:31 crc kubenswrapper[4814]: I1202 15:23:31.422439 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-w6xr2_94a7cab9-3ba9-49c1-b468-b42424e3c43c/kube-rbac-proxy/0.log" Dec 02 15:23:31 crc kubenswrapper[4814]: I1202 15:23:31.542711 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-kxkhr_8e50493d-4c39-4cb7-b752-714c0520e559/kube-rbac-proxy/0.log" Dec 02 15:23:31 crc kubenswrapper[4814]: I1202 
15:23:31.597782 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-kxkhr_8e50493d-4c39-4cb7-b752-714c0520e559/manager/0.log" Dec 02 15:23:31 crc kubenswrapper[4814]: I1202 15:23:31.692514 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-57776_c288b87f-0e35-4b3f-b0f5-1565425396ba/operator/0.log" Dec 02 15:23:31 crc kubenswrapper[4814]: I1202 15:23:31.784034 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-fj2tl_ac66ba40-6166-47b9-9c47-f0f0637e774c/manager/0.log" Dec 02 15:23:31 crc kubenswrapper[4814]: I1202 15:23:31.819758 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-fj2tl_ac66ba40-6166-47b9-9c47-f0f0637e774c/kube-rbac-proxy/0.log" Dec 02 15:23:31 crc kubenswrapper[4814]: I1202 15:23:31.915409 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-mrbvm_55d8049a-e6f9-4ace-ada2-8fc96a06f1e4/kube-rbac-proxy/0.log" Dec 02 15:23:32 crc kubenswrapper[4814]: I1202 15:23:32.076712 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-mrbvm_55d8049a-e6f9-4ace-ada2-8fc96a06f1e4/manager/0.log" Dec 02 15:23:32 crc kubenswrapper[4814]: I1202 15:23:32.080989 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-cskhk_391bcd0f-854f-43c3-ac40-a917e68bf3b8/kube-rbac-proxy/0.log" Dec 02 15:23:32 crc kubenswrapper[4814]: I1202 15:23:32.126315 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-cskhk_391bcd0f-854f-43c3-ac40-a917e68bf3b8/manager/0.log" Dec 02 
15:23:32 crc kubenswrapper[4814]: I1202 15:23:32.283652 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-index-zst84_00b8428a-ae08-4c70-8eb7-566090aa6716/registry-server/0.log" Dec 02 15:23:32 crc kubenswrapper[4814]: I1202 15:23:32.449655 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-987c4dd49-4xbpd_0cabeccf-6592-4e14-b15d-bc876d5061f4/manager/0.log" Dec 02 15:23:34 crc kubenswrapper[4814]: I1202 15:23:34.156452 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tf8jl" event={"ID":"53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a","Type":"ContainerStarted","Data":"7b6503081cc313137e9a566cae8bf4532d0eced125bb764ec20c68cde1badd5f"} Dec 02 15:23:37 crc kubenswrapper[4814]: I1202 15:23:37.176111 4814 generic.go:334] "Generic (PLEG): container finished" podID="53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a" containerID="7b6503081cc313137e9a566cae8bf4532d0eced125bb764ec20c68cde1badd5f" exitCode=0 Dec 02 15:23:37 crc kubenswrapper[4814]: I1202 15:23:37.176180 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tf8jl" event={"ID":"53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a","Type":"ContainerDied","Data":"7b6503081cc313137e9a566cae8bf4532d0eced125bb764ec20c68cde1badd5f"} Dec 02 15:23:38 crc kubenswrapper[4814]: I1202 15:23:38.148885 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:23:38 crc kubenswrapper[4814]: E1202 15:23:38.149209 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:23:39 crc kubenswrapper[4814]: I1202 15:23:39.195443 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tf8jl" event={"ID":"53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a","Type":"ContainerStarted","Data":"73cd7818fd335cfe97bfe9f9ba4fc323c95f0fc8c341461e4adffc9d6f84248b"} Dec 02 15:23:39 crc kubenswrapper[4814]: I1202 15:23:39.224000 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tf8jl" podStartSLOduration=2.275763516 podStartE2EDuration="19.223984949s" podCreationTimestamp="2025-12-02 15:23:20 +0000 UTC" firstStartedPulling="2025-12-02 15:23:22.068495409 +0000 UTC m=+3264.192857613" lastFinishedPulling="2025-12-02 15:23:39.016716842 +0000 UTC m=+3281.141079046" observedRunningTime="2025-12-02 15:23:39.219264066 +0000 UTC m=+3281.343626270" watchObservedRunningTime="2025-12-02 15:23:39.223984949 +0000 UTC m=+3281.348347153" Dec 02 15:23:40 crc kubenswrapper[4814]: I1202 15:23:40.144923 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9" Dec 02 15:23:40 crc kubenswrapper[4814]: E1202 15:23:40.145334 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:23:40 crc kubenswrapper[4814]: I1202 15:23:40.675434 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tf8jl" Dec 02 15:23:40 crc kubenswrapper[4814]: I1202 15:23:40.675481 4814 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tf8jl" Dec 02 15:23:41 crc kubenswrapper[4814]: I1202 15:23:41.721529 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tf8jl" podUID="53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a" containerName="registry-server" probeResult="failure" output=< Dec 02 15:23:41 crc kubenswrapper[4814]: timeout: failed to connect service ":50051" within 1s Dec 02 15:23:41 crc kubenswrapper[4814]: > Dec 02 15:23:49 crc kubenswrapper[4814]: I1202 15:23:49.144800 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:23:49 crc kubenswrapper[4814]: E1202 15:23:49.145462 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:23:50 crc kubenswrapper[4814]: I1202 15:23:50.740809 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tf8jl" Dec 02 15:23:50 crc kubenswrapper[4814]: I1202 15:23:50.819042 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tf8jl" Dec 02 15:23:53 crc kubenswrapper[4814]: I1202 15:23:53.144616 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9" Dec 02 15:23:53 crc kubenswrapper[4814]: E1202 15:23:53.145730 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:23:53 crc kubenswrapper[4814]: I1202 15:23:53.368431 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tf8jl"] Dec 02 15:23:53 crc kubenswrapper[4814]: I1202 15:23:53.941436 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bntmr"] Dec 02 15:23:53 crc kubenswrapper[4814]: I1202 15:23:53.941723 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bntmr" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" containerName="registry-server" containerID="cri-o://345a116abf31fd1325751478b1655265ffc1a599a3eca76578a63078260ec272" gracePeriod=2 Dec 02 15:23:54 crc kubenswrapper[4814]: I1202 15:23:54.340518 4814 generic.go:334] "Generic (PLEG): container finished" podID="6d3328f3-af2c-4579-929d-f9857cbccdb9" containerID="345a116abf31fd1325751478b1655265ffc1a599a3eca76578a63078260ec272" exitCode=0 Dec 02 15:23:54 crc kubenswrapper[4814]: I1202 15:23:54.340782 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bntmr" event={"ID":"6d3328f3-af2c-4579-929d-f9857cbccdb9","Type":"ContainerDied","Data":"345a116abf31fd1325751478b1655265ffc1a599a3eca76578a63078260ec272"} Dec 02 15:23:54 crc kubenswrapper[4814]: I1202 15:23:54.417805 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bntmr" Dec 02 15:23:54 crc kubenswrapper[4814]: I1202 15:23:54.541788 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d3328f3-af2c-4579-929d-f9857cbccdb9-catalog-content\") pod \"6d3328f3-af2c-4579-929d-f9857cbccdb9\" (UID: \"6d3328f3-af2c-4579-929d-f9857cbccdb9\") " Dec 02 15:23:54 crc kubenswrapper[4814]: I1202 15:23:54.541930 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbp7h\" (UniqueName: \"kubernetes.io/projected/6d3328f3-af2c-4579-929d-f9857cbccdb9-kube-api-access-xbp7h\") pod \"6d3328f3-af2c-4579-929d-f9857cbccdb9\" (UID: \"6d3328f3-af2c-4579-929d-f9857cbccdb9\") " Dec 02 15:23:54 crc kubenswrapper[4814]: I1202 15:23:54.542648 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d3328f3-af2c-4579-929d-f9857cbccdb9-utilities\") pod \"6d3328f3-af2c-4579-929d-f9857cbccdb9\" (UID: \"6d3328f3-af2c-4579-929d-f9857cbccdb9\") " Dec 02 15:23:54 crc kubenswrapper[4814]: I1202 15:23:54.543225 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d3328f3-af2c-4579-929d-f9857cbccdb9-utilities" (OuterVolumeSpecName: "utilities") pod "6d3328f3-af2c-4579-929d-f9857cbccdb9" (UID: "6d3328f3-af2c-4579-929d-f9857cbccdb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:23:54 crc kubenswrapper[4814]: I1202 15:23:54.548384 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d3328f3-af2c-4579-929d-f9857cbccdb9-kube-api-access-xbp7h" (OuterVolumeSpecName: "kube-api-access-xbp7h") pod "6d3328f3-af2c-4579-929d-f9857cbccdb9" (UID: "6d3328f3-af2c-4579-929d-f9857cbccdb9"). InnerVolumeSpecName "kube-api-access-xbp7h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:23:54 crc kubenswrapper[4814]: I1202 15:23:54.644606 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbp7h\" (UniqueName: \"kubernetes.io/projected/6d3328f3-af2c-4579-929d-f9857cbccdb9-kube-api-access-xbp7h\") on node \"crc\" DevicePath \"\"" Dec 02 15:23:54 crc kubenswrapper[4814]: I1202 15:23:54.644648 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d3328f3-af2c-4579-929d-f9857cbccdb9-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:23:54 crc kubenswrapper[4814]: I1202 15:23:54.649789 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d3328f3-af2c-4579-929d-f9857cbccdb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d3328f3-af2c-4579-929d-f9857cbccdb9" (UID: "6d3328f3-af2c-4579-929d-f9857cbccdb9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:23:54 crc kubenswrapper[4814]: I1202 15:23:54.745845 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d3328f3-af2c-4579-929d-f9857cbccdb9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:23:55 crc kubenswrapper[4814]: I1202 15:23:55.350878 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bntmr" event={"ID":"6d3328f3-af2c-4579-929d-f9857cbccdb9","Type":"ContainerDied","Data":"c78710cc46c8e11b53c82ab4c318ab0e2df89b23493222d055d9170abf583625"} Dec 02 15:23:55 crc kubenswrapper[4814]: I1202 15:23:55.350932 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bntmr" Dec 02 15:23:55 crc kubenswrapper[4814]: I1202 15:23:55.351209 4814 scope.go:117] "RemoveContainer" containerID="345a116abf31fd1325751478b1655265ffc1a599a3eca76578a63078260ec272" Dec 02 15:23:55 crc kubenswrapper[4814]: I1202 15:23:55.379555 4814 scope.go:117] "RemoveContainer" containerID="e48545731bdee994c5cba1ffefa6b52ccd9f762a49439ac2db218da6f4727255" Dec 02 15:23:55 crc kubenswrapper[4814]: I1202 15:23:55.389950 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bntmr"] Dec 02 15:23:55 crc kubenswrapper[4814]: I1202 15:23:55.403383 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bntmr"] Dec 02 15:23:55 crc kubenswrapper[4814]: I1202 15:23:55.413888 4814 scope.go:117] "RemoveContainer" containerID="9565bc818e08ca355ac8a8a9037fbd6be17f1dd0bc21c4dd9db83f6e9dfb0908" Dec 02 15:23:56 crc kubenswrapper[4814]: I1202 15:23:56.153451 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" path="/var/lib/kubelet/pods/6d3328f3-af2c-4579-929d-f9857cbccdb9/volumes" Dec 02 15:23:56 crc kubenswrapper[4814]: I1202 15:23:56.491931 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-zqrmh_4811cc2a-adc3-4fb7-8f09-d9426a60fffc/control-plane-machine-set-operator/0.log" Dec 02 15:23:56 crc kubenswrapper[4814]: I1202 15:23:56.672475 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-k65hc_25faedf3-ec5c-42cb-804c-15fdfc7ac152/kube-rbac-proxy/0.log" Dec 02 15:23:56 crc kubenswrapper[4814]: I1202 15:23:56.754829 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-k65hc_25faedf3-ec5c-42cb-804c-15fdfc7ac152/machine-api-operator/0.log" Dec 02 15:24:01 
crc kubenswrapper[4814]: I1202 15:24:01.145175 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:24:01 crc kubenswrapper[4814]: E1202 15:24:01.145964 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:24:06 crc kubenswrapper[4814]: I1202 15:24:06.146124 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9" Dec 02 15:24:06 crc kubenswrapper[4814]: E1202 15:24:06.146914 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:24:11 crc kubenswrapper[4814]: I1202 15:24:11.413204 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-gss2c_7953c8b5-72f8-4277-b481-eb5d59a88de4/cert-manager-controller/0.log" Dec 02 15:24:11 crc kubenswrapper[4814]: I1202 15:24:11.586044 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-hkrtx_77039ccf-9957-4593-bba5-4c1c89bcf793/cert-manager-cainjector/0.log" Dec 02 15:24:11 crc kubenswrapper[4814]: I1202 15:24:11.635806 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-xsb4g_0603f2e1-9323-40a6-abf1-8980f21a707d/cert-manager-webhook/0.log" Dec 02 15:24:12 crc kubenswrapper[4814]: I1202 15:24:12.144568 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:24:12 crc kubenswrapper[4814]: E1202 15:24:12.144865 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:24:20 crc kubenswrapper[4814]: I1202 15:24:20.144216 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9" Dec 02 15:24:20 crc kubenswrapper[4814]: E1202 15:24:20.145263 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:24:26 crc kubenswrapper[4814]: I1202 15:24:26.145050 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:24:26 crc kubenswrapper[4814]: I1202 15:24:26.603637 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerStarted","Data":"33489b4ca9d6b6c45a94929ba3bec4490e82e2be7881895a3ad5083cdb0471ac"} Dec 02 15:24:26 
crc kubenswrapper[4814]: I1202 15:24:26.722299 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-d9txb_73a87257-56e2-4465-ba71-524ec1614ef9/nmstate-console-plugin/0.log" Dec 02 15:24:26 crc kubenswrapper[4814]: I1202 15:24:26.947982 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-8vx6h_5965df3a-80a1-4c60-83ec-e1d93b61a489/nmstate-handler/0.log" Dec 02 15:24:27 crc kubenswrapper[4814]: I1202 15:24:27.142533 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-q6vmr_1c57a5d5-c718-404d-a1a4-0ae448d422aa/kube-rbac-proxy/0.log" Dec 02 15:24:27 crc kubenswrapper[4814]: I1202 15:24:27.178754 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-q6vmr_1c57a5d5-c718-404d-a1a4-0ae448d422aa/nmstate-metrics/0.log" Dec 02 15:24:27 crc kubenswrapper[4814]: I1202 15:24:27.332096 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-dbljk_eabb27e3-5307-4104-816f-fcfc906f2c73/nmstate-operator/0.log" Dec 02 15:24:27 crc kubenswrapper[4814]: I1202 15:24:27.396405 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-nxjcc_448d2f06-663c-4836-8848-52113dbc856f/nmstate-webhook/0.log" Dec 02 15:24:33 crc kubenswrapper[4814]: I1202 15:24:33.144870 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9" Dec 02 15:24:33 crc kubenswrapper[4814]: E1202 15:24:33.145454 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:24:44 crc kubenswrapper[4814]: I1202 15:24:44.522405 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-4vgbp_38d43829-f3bd-4d95-9a6f-5e89f3f0f30a/kube-rbac-proxy/0.log" Dec 02 15:24:44 crc kubenswrapper[4814]: I1202 15:24:44.591606 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-4vgbp_38d43829-f3bd-4d95-9a6f-5e89f3f0f30a/controller/0.log" Dec 02 15:24:44 crc kubenswrapper[4814]: I1202 15:24:44.650133 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h5cxr_88b9b5e7-b351-4789-9739-8ed7fbe7269b/cp-frr-files/0.log" Dec 02 15:24:44 crc kubenswrapper[4814]: I1202 15:24:44.855753 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h5cxr_88b9b5e7-b351-4789-9739-8ed7fbe7269b/cp-frr-files/0.log" Dec 02 15:24:44 crc kubenswrapper[4814]: I1202 15:24:44.859383 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h5cxr_88b9b5e7-b351-4789-9739-8ed7fbe7269b/cp-reloader/0.log" Dec 02 15:24:44 crc kubenswrapper[4814]: I1202 15:24:44.874538 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h5cxr_88b9b5e7-b351-4789-9739-8ed7fbe7269b/cp-metrics/0.log" Dec 02 15:24:44 crc kubenswrapper[4814]: I1202 15:24:44.917297 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h5cxr_88b9b5e7-b351-4789-9739-8ed7fbe7269b/cp-reloader/0.log" Dec 02 15:24:45 crc kubenswrapper[4814]: I1202 15:24:45.095311 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h5cxr_88b9b5e7-b351-4789-9739-8ed7fbe7269b/cp-frr-files/0.log" Dec 02 15:24:45 crc kubenswrapper[4814]: I1202 15:24:45.165027 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-h5cxr_88b9b5e7-b351-4789-9739-8ed7fbe7269b/cp-metrics/0.log" Dec 02 15:24:45 crc kubenswrapper[4814]: I1202 15:24:45.169962 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h5cxr_88b9b5e7-b351-4789-9739-8ed7fbe7269b/cp-reloader/0.log" Dec 02 15:24:45 crc kubenswrapper[4814]: I1202 15:24:45.171421 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h5cxr_88b9b5e7-b351-4789-9739-8ed7fbe7269b/cp-metrics/0.log" Dec 02 15:24:45 crc kubenswrapper[4814]: I1202 15:24:45.367741 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h5cxr_88b9b5e7-b351-4789-9739-8ed7fbe7269b/cp-frr-files/0.log" Dec 02 15:24:45 crc kubenswrapper[4814]: I1202 15:24:45.431857 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h5cxr_88b9b5e7-b351-4789-9739-8ed7fbe7269b/cp-metrics/0.log" Dec 02 15:24:45 crc kubenswrapper[4814]: I1202 15:24:45.435103 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h5cxr_88b9b5e7-b351-4789-9739-8ed7fbe7269b/cp-reloader/0.log" Dec 02 15:24:45 crc kubenswrapper[4814]: I1202 15:24:45.458410 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h5cxr_88b9b5e7-b351-4789-9739-8ed7fbe7269b/controller/0.log" Dec 02 15:24:45 crc kubenswrapper[4814]: I1202 15:24:45.633164 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h5cxr_88b9b5e7-b351-4789-9739-8ed7fbe7269b/kube-rbac-proxy/0.log" Dec 02 15:24:45 crc kubenswrapper[4814]: I1202 15:24:45.648411 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h5cxr_88b9b5e7-b351-4789-9739-8ed7fbe7269b/frr-metrics/0.log" Dec 02 15:24:46 crc kubenswrapper[4814]: I1202 15:24:46.019149 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-h5cxr_88b9b5e7-b351-4789-9739-8ed7fbe7269b/kube-rbac-proxy-frr/0.log" Dec 02 15:24:46 crc kubenswrapper[4814]: I1202 15:24:46.171759 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h5cxr_88b9b5e7-b351-4789-9739-8ed7fbe7269b/reloader/0.log" Dec 02 15:24:46 crc kubenswrapper[4814]: I1202 15:24:46.366366 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-z95mn_8a69012b-d3a3-47ba-847f-5c171ca07026/frr-k8s-webhook-server/0.log" Dec 02 15:24:46 crc kubenswrapper[4814]: I1202 15:24:46.370507 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h5cxr_88b9b5e7-b351-4789-9739-8ed7fbe7269b/frr/0.log" Dec 02 15:24:46 crc kubenswrapper[4814]: I1202 15:24:46.515782 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6b695cd4c4-skhtm_4a489ded-6343-4d09-a0a9-e61cc452e979/manager/0.log" Dec 02 15:24:46 crc kubenswrapper[4814]: I1202 15:24:46.549701 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-866879c6dc-gggxz_9eabc75b-76b2-4d40-9055-b0b7906344a6/webhook-server/0.log" Dec 02 15:24:46 crc kubenswrapper[4814]: I1202 15:24:46.686021 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8g52s_7ffa654b-5a5c-44c2-9b37-85e00c8de913/kube-rbac-proxy/0.log" Dec 02 15:24:46 crc kubenswrapper[4814]: I1202 15:24:46.916347 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8g52s_7ffa654b-5a5c-44c2-9b37-85e00c8de913/speaker/0.log" Dec 02 15:24:47 crc kubenswrapper[4814]: I1202 15:24:47.145249 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9" Dec 02 15:24:47 crc kubenswrapper[4814]: E1202 15:24:47.145731 4814 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:24:59 crc kubenswrapper[4814]: I1202 15:24:59.145635 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9" Dec 02 15:24:59 crc kubenswrapper[4814]: E1202 15:24:59.146723 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:25:12 crc kubenswrapper[4814]: I1202 15:25:12.145115 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9" Dec 02 15:25:12 crc kubenswrapper[4814]: E1202 15:25:12.146878 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:25:12 crc kubenswrapper[4814]: I1202 15:25:12.525980 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_5e2f4172-7a16-4b44-9a2d-796ea4969ae9/init-config-reloader/0.log" Dec 02 15:25:12 crc kubenswrapper[4814]: I1202 15:25:12.717510 4814 
log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_5e2f4172-7a16-4b44-9a2d-796ea4969ae9/config-reloader/0.log" Dec 02 15:25:12 crc kubenswrapper[4814]: I1202 15:25:12.744634 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_5e2f4172-7a16-4b44-9a2d-796ea4969ae9/alertmanager/0.log" Dec 02 15:25:12 crc kubenswrapper[4814]: I1202 15:25:12.767266 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_5e2f4172-7a16-4b44-9a2d-796ea4969ae9/init-config-reloader/0.log" Dec 02 15:25:12 crc kubenswrapper[4814]: I1202 15:25:12.928259 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_be9358f4-97c2-48a1-9499-0a7eac196dcf/ceilometer-notification-agent/0.log" Dec 02 15:25:12 crc kubenswrapper[4814]: I1202 15:25:12.929734 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_be9358f4-97c2-48a1-9499-0a7eac196dcf/ceilometer-central-agent/0.log" Dec 02 15:25:12 crc kubenswrapper[4814]: I1202 15:25:12.963381 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_be9358f4-97c2-48a1-9499-0a7eac196dcf/sg-core/0.log" Dec 02 15:25:12 crc kubenswrapper[4814]: I1202 15:25:12.999575 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_be9358f4-97c2-48a1-9499-0a7eac196dcf/proxy-httpd/0.log" Dec 02 15:25:13 crc kubenswrapper[4814]: I1202 15:25:13.197280 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_keystone-cron-29411461-fpt5d_526fa3fe-0f34-49ac-b9d6-f580b208dae2/keystone-cron/0.log" Dec 02 15:25:13 crc kubenswrapper[4814]: I1202 15:25:13.276899 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_keystone-697b49fc9c-nwnxx_cd5c8f55-035f-4536-a343-31a0d7cde6c4/keystone-api/0.log" Dec 02 15:25:13 crc kubenswrapper[4814]: I1202 15:25:13.424833 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_kube-state-metrics-0_4ed28d4e-1e9a-49a3-a2d3-2b5e88411e2c/kube-state-metrics/0.log" Dec 02 15:25:13 crc kubenswrapper[4814]: I1202 15:25:13.635141 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_63005150-087b-49a1-8ac9-d9e8aef28cab/mysql-bootstrap/0.log" Dec 02 15:25:13 crc kubenswrapper[4814]: I1202 15:25:13.838413 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_63005150-087b-49a1-8ac9-d9e8aef28cab/mysql-bootstrap/0.log" Dec 02 15:25:13 crc kubenswrapper[4814]: I1202 15:25:13.865607 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_63005150-087b-49a1-8ac9-d9e8aef28cab/galera/0.log" Dec 02 15:25:14 crc kubenswrapper[4814]: I1202 15:25:14.103538 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstackclient_eab12c37-21cd-45ac-9cd9-4d39a91f9192/openstackclient/0.log" Dec 02 15:25:14 crc kubenswrapper[4814]: I1202 15:25:14.237460 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_f6a3a77e-84b6-4f45-9666-9b295adabe90/init-config-reloader/0.log" Dec 02 15:25:14 crc kubenswrapper[4814]: I1202 15:25:14.474455 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_f6a3a77e-84b6-4f45-9666-9b295adabe90/init-config-reloader/0.log" Dec 02 15:25:14 crc kubenswrapper[4814]: I1202 15:25:14.484460 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_f6a3a77e-84b6-4f45-9666-9b295adabe90/config-reloader/0.log" Dec 02 15:25:14 crc 
kubenswrapper[4814]: I1202 15:25:14.497157 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_f6a3a77e-84b6-4f45-9666-9b295adabe90/prometheus/0.log" Dec 02 15:25:14 crc kubenswrapper[4814]: I1202 15:25:14.774855 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_f6a3a77e-84b6-4f45-9666-9b295adabe90/thanos-sidecar/0.log" Dec 02 15:25:14 crc kubenswrapper[4814]: I1202 15:25:14.845055 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_46b61b10-de07-41cf-9fc0-03b5fc067b56/setup-container/0.log" Dec 02 15:25:15 crc kubenswrapper[4814]: I1202 15:25:15.130569 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_46b61b10-de07-41cf-9fc0-03b5fc067b56/setup-container/0.log" Dec 02 15:25:15 crc kubenswrapper[4814]: I1202 15:25:15.149133 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_46b61b10-de07-41cf-9fc0-03b5fc067b56/rabbitmq/0.log" Dec 02 15:25:15 crc kubenswrapper[4814]: I1202 15:25:15.356856 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_7504d82f-f8f4-4784-ae11-85688085e352/setup-container/0.log" Dec 02 15:25:15 crc kubenswrapper[4814]: I1202 15:25:15.697941 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_7504d82f-f8f4-4784-ae11-85688085e352/setup-container/0.log" Dec 02 15:25:15 crc kubenswrapper[4814]: I1202 15:25:15.770672 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_7504d82f-f8f4-4784-ae11-85688085e352/rabbitmq/0.log" Dec 02 15:25:15 crc kubenswrapper[4814]: I1202 15:25:15.854258 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-d647-account-create-update-n5j8w_001710ba-b668-41f4-a817-fcd45782dd05/mariadb-account-create-update/0.log" Dec 02 15:25:16 crc kubenswrapper[4814]: I1202 15:25:16.049307 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-db-create-xvxsj_d7fa1510-c400-479b-9471-f8c1514984c0/mariadb-database-create/0.log" Dec 02 15:25:16 crc kubenswrapper[4814]: I1202 15:25:16.156764 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dkvpk"] Dec 02 15:25:16 crc kubenswrapper[4814]: E1202 15:25:16.157029 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" containerName="extract-content" Dec 02 15:25:16 crc kubenswrapper[4814]: I1202 15:25:16.157046 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" containerName="extract-content" Dec 02 15:25:16 crc kubenswrapper[4814]: E1202 15:25:16.157074 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" containerName="registry-server" Dec 02 15:25:16 crc kubenswrapper[4814]: I1202 15:25:16.157081 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" containerName="registry-server" Dec 02 15:25:16 crc kubenswrapper[4814]: E1202 15:25:16.157094 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" containerName="extract-utilities" Dec 02 15:25:16 crc kubenswrapper[4814]: I1202 15:25:16.157100 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" containerName="extract-utilities" Dec 02 15:25:16 crc kubenswrapper[4814]: I1202 15:25:16.157273 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3328f3-af2c-4579-929d-f9857cbccdb9" containerName="registry-server" Dec 02 15:25:16 crc 
kubenswrapper[4814]: I1202 15:25:16.158472 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dkvpk" Dec 02 15:25:16 crc kubenswrapper[4814]: I1202 15:25:16.166629 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dkvpk"] Dec 02 15:25:16 crc kubenswrapper[4814]: I1202 15:25:16.181022 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56gvk\" (UniqueName: \"kubernetes.io/projected/87821a7e-f907-4107-acf7-51eee52d30d7-kube-api-access-56gvk\") pod \"certified-operators-dkvpk\" (UID: \"87821a7e-f907-4107-acf7-51eee52d30d7\") " pod="openshift-marketplace/certified-operators-dkvpk" Dec 02 15:25:16 crc kubenswrapper[4814]: I1202 15:25:16.181069 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87821a7e-f907-4107-acf7-51eee52d30d7-utilities\") pod \"certified-operators-dkvpk\" (UID: \"87821a7e-f907-4107-acf7-51eee52d30d7\") " pod="openshift-marketplace/certified-operators-dkvpk" Dec 02 15:25:16 crc kubenswrapper[4814]: I1202 15:25:16.181362 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87821a7e-f907-4107-acf7-51eee52d30d7-catalog-content\") pod \"certified-operators-dkvpk\" (UID: \"87821a7e-f907-4107-acf7-51eee52d30d7\") " pod="openshift-marketplace/certified-operators-dkvpk" Dec 02 15:25:16 crc kubenswrapper[4814]: I1202 15:25:16.282284 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87821a7e-f907-4107-acf7-51eee52d30d7-catalog-content\") pod \"certified-operators-dkvpk\" (UID: \"87821a7e-f907-4107-acf7-51eee52d30d7\") " pod="openshift-marketplace/certified-operators-dkvpk" Dec 02 
15:25:16 crc kubenswrapper[4814]: I1202 15:25:16.282398 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87821a7e-f907-4107-acf7-51eee52d30d7-utilities\") pod \"certified-operators-dkvpk\" (UID: \"87821a7e-f907-4107-acf7-51eee52d30d7\") " pod="openshift-marketplace/certified-operators-dkvpk" Dec 02 15:25:16 crc kubenswrapper[4814]: I1202 15:25:16.282415 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56gvk\" (UniqueName: \"kubernetes.io/projected/87821a7e-f907-4107-acf7-51eee52d30d7-kube-api-access-56gvk\") pod \"certified-operators-dkvpk\" (UID: \"87821a7e-f907-4107-acf7-51eee52d30d7\") " pod="openshift-marketplace/certified-operators-dkvpk" Dec 02 15:25:16 crc kubenswrapper[4814]: I1202 15:25:16.283297 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87821a7e-f907-4107-acf7-51eee52d30d7-catalog-content\") pod \"certified-operators-dkvpk\" (UID: \"87821a7e-f907-4107-acf7-51eee52d30d7\") " pod="openshift-marketplace/certified-operators-dkvpk" Dec 02 15:25:16 crc kubenswrapper[4814]: I1202 15:25:16.283538 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87821a7e-f907-4107-acf7-51eee52d30d7-utilities\") pod \"certified-operators-dkvpk\" (UID: \"87821a7e-f907-4107-acf7-51eee52d30d7\") " pod="openshift-marketplace/certified-operators-dkvpk" Dec 02 15:25:16 crc kubenswrapper[4814]: I1202 15:25:16.303221 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56gvk\" (UniqueName: \"kubernetes.io/projected/87821a7e-f907-4107-acf7-51eee52d30d7-kube-api-access-56gvk\") pod \"certified-operators-dkvpk\" (UID: \"87821a7e-f907-4107-acf7-51eee52d30d7\") " pod="openshift-marketplace/certified-operators-dkvpk" Dec 02 15:25:16 crc kubenswrapper[4814]: 
I1202 15:25:16.419928 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-api-0_fa7e0551-c005-4812-b91c-9913814f747b/watcher-api/0.log" Dec 02 15:25:16 crc kubenswrapper[4814]: I1202 15:25:16.420096 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-api-0_fa7e0551-c005-4812-b91c-9913814f747b/watcher-kuttl-api-log/0.log" Dec 02 15:25:16 crc kubenswrapper[4814]: I1202 15:25:16.484173 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dkvpk" Dec 02 15:25:17 crc kubenswrapper[4814]: I1202 15:25:17.042625 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-applier-0_320ded89-e851-4a10-897f-bcb7127b6d22/watcher-applier/0.log" Dec 02 15:25:17 crc kubenswrapper[4814]: I1202 15:25:17.099516 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dkvpk"] Dec 02 15:25:17 crc kubenswrapper[4814]: I1202 15:25:17.188538 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-db-sync-bdfjp_d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532/watcher-kuttl-db-sync/0.log" Dec 02 15:25:17 crc kubenswrapper[4814]: I1202 15:25:17.335064 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_9b519b57-d380-444e-863b-b42e4d4eb06c/watcher-decision-engine/6.log" Dec 02 15:25:17 crc kubenswrapper[4814]: I1202 15:25:17.391195 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_9b519b57-d380-444e-863b-b42e4d4eb06c/watcher-decision-engine/6.log" Dec 02 15:25:18 crc kubenswrapper[4814]: I1202 15:25:18.057272 4814 generic.go:334] "Generic (PLEG): container finished" podID="87821a7e-f907-4107-acf7-51eee52d30d7" 
containerID="46eeecf086ed9ecec798c4dd5db70fd93ba335acdd94f0563e959b39a9793532" exitCode=0 Dec 02 15:25:18 crc kubenswrapper[4814]: I1202 15:25:18.057345 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkvpk" event={"ID":"87821a7e-f907-4107-acf7-51eee52d30d7","Type":"ContainerDied","Data":"46eeecf086ed9ecec798c4dd5db70fd93ba335acdd94f0563e959b39a9793532"} Dec 02 15:25:18 crc kubenswrapper[4814]: I1202 15:25:18.057380 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkvpk" event={"ID":"87821a7e-f907-4107-acf7-51eee52d30d7","Type":"ContainerStarted","Data":"44ef18f0cd2e336941553c6827e4dca5cf92a203ada588ee9489dcbec0d3828a"} Dec 02 15:25:20 crc kubenswrapper[4814]: I1202 15:25:20.090474 4814 generic.go:334] "Generic (PLEG): container finished" podID="87821a7e-f907-4107-acf7-51eee52d30d7" containerID="b649bc0730578f28bc151ac2add0b7a2bdb6d24a1eac2cba3fc825d58980fedd" exitCode=0 Dec 02 15:25:20 crc kubenswrapper[4814]: I1202 15:25:20.090964 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkvpk" event={"ID":"87821a7e-f907-4107-acf7-51eee52d30d7","Type":"ContainerDied","Data":"b649bc0730578f28bc151ac2add0b7a2bdb6d24a1eac2cba3fc825d58980fedd"} Dec 02 15:25:21 crc kubenswrapper[4814]: I1202 15:25:21.099264 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkvpk" event={"ID":"87821a7e-f907-4107-acf7-51eee52d30d7","Type":"ContainerStarted","Data":"a66263b1fde86e6bedc4a2d57d15671cecc92043fa2501cd002b7f420f574d5d"} Dec 02 15:25:21 crc kubenswrapper[4814]: I1202 15:25:21.115647 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dkvpk" podStartSLOduration=2.380286402 podStartE2EDuration="5.115631276s" podCreationTimestamp="2025-12-02 15:25:16 +0000 UTC" firstStartedPulling="2025-12-02 15:25:18.059452961 
+0000 UTC m=+3380.183815165" lastFinishedPulling="2025-12-02 15:25:20.794797845 +0000 UTC m=+3382.919160039" observedRunningTime="2025-12-02 15:25:21.114414732 +0000 UTC m=+3383.238776936" watchObservedRunningTime="2025-12-02 15:25:21.115631276 +0000 UTC m=+3383.239993480" Dec 02 15:25:23 crc kubenswrapper[4814]: I1202 15:25:23.146870 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9" Dec 02 15:25:23 crc kubenswrapper[4814]: E1202 15:25:23.147681 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:25:26 crc kubenswrapper[4814]: I1202 15:25:26.484795 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dkvpk" Dec 02 15:25:26 crc kubenswrapper[4814]: I1202 15:25:26.485200 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dkvpk" Dec 02 15:25:26 crc kubenswrapper[4814]: I1202 15:25:26.530974 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dkvpk" Dec 02 15:25:27 crc kubenswrapper[4814]: I1202 15:25:27.194674 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dkvpk" Dec 02 15:25:27 crc kubenswrapper[4814]: I1202 15:25:27.818408 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_memcached-0_64bd5b07-0227-4d38-9275-f1d59d7ce529/memcached/0.log" Dec 02 15:25:30 crc kubenswrapper[4814]: I1202 15:25:30.140744 4814 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dkvpk"] Dec 02 15:25:30 crc kubenswrapper[4814]: I1202 15:25:30.141311 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dkvpk" podUID="87821a7e-f907-4107-acf7-51eee52d30d7" containerName="registry-server" containerID="cri-o://a66263b1fde86e6bedc4a2d57d15671cecc92043fa2501cd002b7f420f574d5d" gracePeriod=2 Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.096020 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dkvpk" Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.201777 4814 generic.go:334] "Generic (PLEG): container finished" podID="87821a7e-f907-4107-acf7-51eee52d30d7" containerID="a66263b1fde86e6bedc4a2d57d15671cecc92043fa2501cd002b7f420f574d5d" exitCode=0 Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.202197 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkvpk" event={"ID":"87821a7e-f907-4107-acf7-51eee52d30d7","Type":"ContainerDied","Data":"a66263b1fde86e6bedc4a2d57d15671cecc92043fa2501cd002b7f420f574d5d"} Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.202259 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkvpk" event={"ID":"87821a7e-f907-4107-acf7-51eee52d30d7","Type":"ContainerDied","Data":"44ef18f0cd2e336941553c6827e4dca5cf92a203ada588ee9489dcbec0d3828a"} Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.202288 4814 scope.go:117] "RemoveContainer" containerID="a66263b1fde86e6bedc4a2d57d15671cecc92043fa2501cd002b7f420f574d5d" Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.202431 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dkvpk" Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.222256 4814 scope.go:117] "RemoveContainer" containerID="b649bc0730578f28bc151ac2add0b7a2bdb6d24a1eac2cba3fc825d58980fedd" Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.229221 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56gvk\" (UniqueName: \"kubernetes.io/projected/87821a7e-f907-4107-acf7-51eee52d30d7-kube-api-access-56gvk\") pod \"87821a7e-f907-4107-acf7-51eee52d30d7\" (UID: \"87821a7e-f907-4107-acf7-51eee52d30d7\") " Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.229299 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87821a7e-f907-4107-acf7-51eee52d30d7-catalog-content\") pod \"87821a7e-f907-4107-acf7-51eee52d30d7\" (UID: \"87821a7e-f907-4107-acf7-51eee52d30d7\") " Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.229328 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87821a7e-f907-4107-acf7-51eee52d30d7-utilities\") pod \"87821a7e-f907-4107-acf7-51eee52d30d7\" (UID: \"87821a7e-f907-4107-acf7-51eee52d30d7\") " Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.230812 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87821a7e-f907-4107-acf7-51eee52d30d7-utilities" (OuterVolumeSpecName: "utilities") pod "87821a7e-f907-4107-acf7-51eee52d30d7" (UID: "87821a7e-f907-4107-acf7-51eee52d30d7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.238164 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87821a7e-f907-4107-acf7-51eee52d30d7-kube-api-access-56gvk" (OuterVolumeSpecName: "kube-api-access-56gvk") pod "87821a7e-f907-4107-acf7-51eee52d30d7" (UID: "87821a7e-f907-4107-acf7-51eee52d30d7"). InnerVolumeSpecName "kube-api-access-56gvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.250992 4814 scope.go:117] "RemoveContainer" containerID="46eeecf086ed9ecec798c4dd5db70fd93ba335acdd94f0563e959b39a9793532" Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.276448 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87821a7e-f907-4107-acf7-51eee52d30d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87821a7e-f907-4107-acf7-51eee52d30d7" (UID: "87821a7e-f907-4107-acf7-51eee52d30d7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.310158 4814 scope.go:117] "RemoveContainer" containerID="a66263b1fde86e6bedc4a2d57d15671cecc92043fa2501cd002b7f420f574d5d" Dec 02 15:25:31 crc kubenswrapper[4814]: E1202 15:25:31.310597 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a66263b1fde86e6bedc4a2d57d15671cecc92043fa2501cd002b7f420f574d5d\": container with ID starting with a66263b1fde86e6bedc4a2d57d15671cecc92043fa2501cd002b7f420f574d5d not found: ID does not exist" containerID="a66263b1fde86e6bedc4a2d57d15671cecc92043fa2501cd002b7f420f574d5d" Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.310640 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a66263b1fde86e6bedc4a2d57d15671cecc92043fa2501cd002b7f420f574d5d"} err="failed to get container status \"a66263b1fde86e6bedc4a2d57d15671cecc92043fa2501cd002b7f420f574d5d\": rpc error: code = NotFound desc = could not find container \"a66263b1fde86e6bedc4a2d57d15671cecc92043fa2501cd002b7f420f574d5d\": container with ID starting with a66263b1fde86e6bedc4a2d57d15671cecc92043fa2501cd002b7f420f574d5d not found: ID does not exist" Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.310666 4814 scope.go:117] "RemoveContainer" containerID="b649bc0730578f28bc151ac2add0b7a2bdb6d24a1eac2cba3fc825d58980fedd" Dec 02 15:25:31 crc kubenswrapper[4814]: E1202 15:25:31.310992 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b649bc0730578f28bc151ac2add0b7a2bdb6d24a1eac2cba3fc825d58980fedd\": container with ID starting with b649bc0730578f28bc151ac2add0b7a2bdb6d24a1eac2cba3fc825d58980fedd not found: ID does not exist" containerID="b649bc0730578f28bc151ac2add0b7a2bdb6d24a1eac2cba3fc825d58980fedd" Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.311025 
4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b649bc0730578f28bc151ac2add0b7a2bdb6d24a1eac2cba3fc825d58980fedd"} err="failed to get container status \"b649bc0730578f28bc151ac2add0b7a2bdb6d24a1eac2cba3fc825d58980fedd\": rpc error: code = NotFound desc = could not find container \"b649bc0730578f28bc151ac2add0b7a2bdb6d24a1eac2cba3fc825d58980fedd\": container with ID starting with b649bc0730578f28bc151ac2add0b7a2bdb6d24a1eac2cba3fc825d58980fedd not found: ID does not exist" Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.311044 4814 scope.go:117] "RemoveContainer" containerID="46eeecf086ed9ecec798c4dd5db70fd93ba335acdd94f0563e959b39a9793532" Dec 02 15:25:31 crc kubenswrapper[4814]: E1202 15:25:31.311342 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46eeecf086ed9ecec798c4dd5db70fd93ba335acdd94f0563e959b39a9793532\": container with ID starting with 46eeecf086ed9ecec798c4dd5db70fd93ba335acdd94f0563e959b39a9793532 not found: ID does not exist" containerID="46eeecf086ed9ecec798c4dd5db70fd93ba335acdd94f0563e959b39a9793532" Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.311373 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46eeecf086ed9ecec798c4dd5db70fd93ba335acdd94f0563e959b39a9793532"} err="failed to get container status \"46eeecf086ed9ecec798c4dd5db70fd93ba335acdd94f0563e959b39a9793532\": rpc error: code = NotFound desc = could not find container \"46eeecf086ed9ecec798c4dd5db70fd93ba335acdd94f0563e959b39a9793532\": container with ID starting with 46eeecf086ed9ecec798c4dd5db70fd93ba335acdd94f0563e959b39a9793532 not found: ID does not exist" Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.332357 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56gvk\" (UniqueName: 
\"kubernetes.io/projected/87821a7e-f907-4107-acf7-51eee52d30d7-kube-api-access-56gvk\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.332386 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87821a7e-f907-4107-acf7-51eee52d30d7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.332396 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87821a7e-f907-4107-acf7-51eee52d30d7-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.532868 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dkvpk"] Dec 02 15:25:31 crc kubenswrapper[4814]: I1202 15:25:31.552307 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dkvpk"] Dec 02 15:25:32 crc kubenswrapper[4814]: I1202 15:25:32.160139 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87821a7e-f907-4107-acf7-51eee52d30d7" path="/var/lib/kubelet/pods/87821a7e-f907-4107-acf7-51eee52d30d7/volumes" Dec 02 15:25:36 crc kubenswrapper[4814]: I1202 15:25:36.088141 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9_ebcb42b8-8b25-4bff-92d5-f572066a7c53/util/0.log" Dec 02 15:25:36 crc kubenswrapper[4814]: I1202 15:25:36.144627 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9" Dec 02 15:25:36 crc kubenswrapper[4814]: E1202 15:25:36.144896 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine 
pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:25:36 crc kubenswrapper[4814]: I1202 15:25:36.232312 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9_ebcb42b8-8b25-4bff-92d5-f572066a7c53/util/0.log" Dec 02 15:25:36 crc kubenswrapper[4814]: I1202 15:25:36.304468 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9_ebcb42b8-8b25-4bff-92d5-f572066a7c53/pull/0.log" Dec 02 15:25:36 crc kubenswrapper[4814]: I1202 15:25:36.311055 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9_ebcb42b8-8b25-4bff-92d5-f572066a7c53/pull/0.log" Dec 02 15:25:36 crc kubenswrapper[4814]: I1202 15:25:36.509058 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9_ebcb42b8-8b25-4bff-92d5-f572066a7c53/util/0.log" Dec 02 15:25:36 crc kubenswrapper[4814]: I1202 15:25:36.531519 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9_ebcb42b8-8b25-4bff-92d5-f572066a7c53/extract/0.log" Dec 02 15:25:36 crc kubenswrapper[4814]: I1202 15:25:36.603449 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amdwd9_ebcb42b8-8b25-4bff-92d5-f572066a7c53/pull/0.log" Dec 02 15:25:36 crc kubenswrapper[4814]: I1202 15:25:36.685501 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2_325de51a-d49a-46b9-b0f0-08fbc59525b9/util/0.log" Dec 02 15:25:36 crc kubenswrapper[4814]: I1202 15:25:36.891820 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2_325de51a-d49a-46b9-b0f0-08fbc59525b9/pull/0.log" Dec 02 15:25:36 crc kubenswrapper[4814]: I1202 15:25:36.896618 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2_325de51a-d49a-46b9-b0f0-08fbc59525b9/util/0.log" Dec 02 15:25:36 crc kubenswrapper[4814]: I1202 15:25:36.902447 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2_325de51a-d49a-46b9-b0f0-08fbc59525b9/pull/0.log" Dec 02 15:25:37 crc kubenswrapper[4814]: I1202 15:25:37.116748 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2_325de51a-d49a-46b9-b0f0-08fbc59525b9/extract/0.log" Dec 02 15:25:37 crc kubenswrapper[4814]: I1202 15:25:37.133276 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2_325de51a-d49a-46b9-b0f0-08fbc59525b9/util/0.log" Dec 02 15:25:37 crc kubenswrapper[4814]: I1202 15:25:37.147017 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fn7pq2_325de51a-d49a-46b9-b0f0-08fbc59525b9/pull/0.log" Dec 02 15:25:37 crc kubenswrapper[4814]: I1202 15:25:37.326906 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf_66a95946-f709-4cb6-8285-982ae589bd5c/util/0.log" Dec 02 
15:25:37 crc kubenswrapper[4814]: I1202 15:25:37.524024 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf_66a95946-f709-4cb6-8285-982ae589bd5c/util/0.log"
Dec 02 15:25:37 crc kubenswrapper[4814]: I1202 15:25:37.540343 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf_66a95946-f709-4cb6-8285-982ae589bd5c/pull/0.log"
Dec 02 15:25:37 crc kubenswrapper[4814]: I1202 15:25:37.570771 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf_66a95946-f709-4cb6-8285-982ae589bd5c/pull/0.log"
Dec 02 15:25:37 crc kubenswrapper[4814]: I1202 15:25:37.777115 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf_66a95946-f709-4cb6-8285-982ae589bd5c/pull/0.log"
Dec 02 15:25:37 crc kubenswrapper[4814]: I1202 15:25:37.814042 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf_66a95946-f709-4cb6-8285-982ae589bd5c/extract/0.log"
Dec 02 15:25:37 crc kubenswrapper[4814]: I1202 15:25:37.817886 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kvttf_66a95946-f709-4cb6-8285-982ae589bd5c/util/0.log"
Dec 02 15:25:37 crc kubenswrapper[4814]: I1202 15:25:37.969167 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4_99e6fcf2-2bab-4258-820c-09748fd9a378/util/0.log"
Dec 02 15:25:38 crc kubenswrapper[4814]: I1202 15:25:38.273812 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4_99e6fcf2-2bab-4258-820c-09748fd9a378/util/0.log"
Dec 02 15:25:38 crc kubenswrapper[4814]: I1202 15:25:38.330157 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4_99e6fcf2-2bab-4258-820c-09748fd9a378/pull/0.log"
Dec 02 15:25:38 crc kubenswrapper[4814]: I1202 15:25:38.330373 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4_99e6fcf2-2bab-4258-820c-09748fd9a378/pull/0.log"
Dec 02 15:25:38 crc kubenswrapper[4814]: I1202 15:25:38.519207 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4_99e6fcf2-2bab-4258-820c-09748fd9a378/util/0.log"
Dec 02 15:25:38 crc kubenswrapper[4814]: I1202 15:25:38.529460 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4_99e6fcf2-2bab-4258-820c-09748fd9a378/extract/0.log"
Dec 02 15:25:38 crc kubenswrapper[4814]: I1202 15:25:38.558752 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83cqgw4_99e6fcf2-2bab-4258-820c-09748fd9a378/pull/0.log"
Dec 02 15:25:38 crc kubenswrapper[4814]: I1202 15:25:38.713380 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-csss6_ed0035d3-3c24-4850-a7d5-e606377481bb/extract-utilities/0.log"
Dec 02 15:25:38 crc kubenswrapper[4814]: I1202 15:25:38.879713 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-csss6_ed0035d3-3c24-4850-a7d5-e606377481bb/extract-utilities/0.log"
Dec 02 15:25:38 crc kubenswrapper[4814]: I1202 15:25:38.893380 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-csss6_ed0035d3-3c24-4850-a7d5-e606377481bb/extract-content/0.log"
Dec 02 15:25:39 crc kubenswrapper[4814]: I1202 15:25:39.003631 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-csss6_ed0035d3-3c24-4850-a7d5-e606377481bb/extract-content/0.log"
Dec 02 15:25:39 crc kubenswrapper[4814]: I1202 15:25:39.161382 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-csss6_ed0035d3-3c24-4850-a7d5-e606377481bb/extract-utilities/0.log"
Dec 02 15:25:39 crc kubenswrapper[4814]: I1202 15:25:39.242638 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-csss6_ed0035d3-3c24-4850-a7d5-e606377481bb/extract-content/0.log"
Dec 02 15:25:39 crc kubenswrapper[4814]: I1202 15:25:39.450646 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2sf56_a4566b4a-d525-4e3c-8a3b-b1825d8e736e/extract-utilities/0.log"
Dec 02 15:25:39 crc kubenswrapper[4814]: I1202 15:25:39.686967 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2sf56_a4566b4a-d525-4e3c-8a3b-b1825d8e736e/extract-utilities/0.log"
Dec 02 15:25:39 crc kubenswrapper[4814]: I1202 15:25:39.687583 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2sf56_a4566b4a-d525-4e3c-8a3b-b1825d8e736e/extract-content/0.log"
Dec 02 15:25:39 crc kubenswrapper[4814]: I1202 15:25:39.785466 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2sf56_a4566b4a-d525-4e3c-8a3b-b1825d8e736e/extract-content/0.log"
Dec 02 15:25:39 crc kubenswrapper[4814]: I1202 15:25:39.798608 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-csss6_ed0035d3-3c24-4850-a7d5-e606377481bb/registry-server/0.log"
Dec 02 15:25:40 crc kubenswrapper[4814]: I1202 15:25:40.086716 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2sf56_a4566b4a-d525-4e3c-8a3b-b1825d8e736e/extract-utilities/0.log"
Dec 02 15:25:40 crc kubenswrapper[4814]: I1202 15:25:40.140650 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2sf56_a4566b4a-d525-4e3c-8a3b-b1825d8e736e/extract-content/0.log"
Dec 02 15:25:40 crc kubenswrapper[4814]: I1202 15:25:40.207322 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wmlq8_cd894607-26ea-4312-98b1-99c3cdcd522a/marketplace-operator/0.log"
Dec 02 15:25:40 crc kubenswrapper[4814]: I1202 15:25:40.407759 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9xllv_cea53aaa-316d-439c-9269-0d09e8d23eef/extract-utilities/0.log"
Dec 02 15:25:40 crc kubenswrapper[4814]: I1202 15:25:40.565874 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9xllv_cea53aaa-316d-439c-9269-0d09e8d23eef/extract-content/0.log"
Dec 02 15:25:40 crc kubenswrapper[4814]: I1202 15:25:40.594108 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9xllv_cea53aaa-316d-439c-9269-0d09e8d23eef/extract-content/0.log"
Dec 02 15:25:40 crc kubenswrapper[4814]: I1202 15:25:40.644864 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9xllv_cea53aaa-316d-439c-9269-0d09e8d23eef/extract-utilities/0.log"
Dec 02 15:25:40 crc kubenswrapper[4814]: I1202 15:25:40.672442 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2sf56_a4566b4a-d525-4e3c-8a3b-b1825d8e736e/registry-server/0.log"
Dec 02 15:25:40 crc kubenswrapper[4814]: I1202 15:25:40.757136 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9xllv_cea53aaa-316d-439c-9269-0d09e8d23eef/extract-utilities/0.log"
Dec 02 15:25:40 crc kubenswrapper[4814]: I1202 15:25:40.782702 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9xllv_cea53aaa-316d-439c-9269-0d09e8d23eef/extract-content/0.log"
Dec 02 15:25:40 crc kubenswrapper[4814]: I1202 15:25:40.918117 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tf8jl_53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a/extract-utilities/0.log"
Dec 02 15:25:40 crc kubenswrapper[4814]: I1202 15:25:40.941601 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9xllv_cea53aaa-316d-439c-9269-0d09e8d23eef/registry-server/0.log"
Dec 02 15:25:41 crc kubenswrapper[4814]: I1202 15:25:41.075037 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tf8jl_53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a/extract-content/0.log"
Dec 02 15:25:41 crc kubenswrapper[4814]: I1202 15:25:41.110462 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tf8jl_53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a/extract-utilities/0.log"
Dec 02 15:25:41 crc kubenswrapper[4814]: I1202 15:25:41.135484 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tf8jl_53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a/extract-content/0.log"
Dec 02 15:25:41 crc kubenswrapper[4814]: I1202 15:25:41.362492 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tf8jl_53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a/extract-utilities/0.log"
Dec 02 15:25:41 crc kubenswrapper[4814]: I1202 15:25:41.364991 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tf8jl_53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a/extract-content/0.log"
Dec 02 15:25:41 crc kubenswrapper[4814]: I1202 15:25:41.448984 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tf8jl_53a60a44-e1ce-42c4-9ad6-fcf7ec5a4e3a/registry-server/0.log"
Dec 02 15:25:50 crc kubenswrapper[4814]: I1202 15:25:50.146445 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9"
Dec 02 15:25:50 crc kubenswrapper[4814]: E1202 15:25:50.147415 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c"
Dec 02 15:25:55 crc kubenswrapper[4814]: I1202 15:25:55.397948 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-jvfff_5f531683-ec7c-4da1-a867-ddc54e0ac5ee/prometheus-operator/0.log"
Dec 02 15:25:55 crc kubenswrapper[4814]: I1202 15:25:55.575703 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f47495dc9-n2pjh_7b27a473-e399-48d4-b811-564ad42ff316/prometheus-operator-admission-webhook/0.log"
Dec 02 15:25:55 crc kubenswrapper[4814]: I1202 15:25:55.629413 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f47495dc9-qjf5d_7a7f89f6-1e09-4ba3-a4f4-c0c81e38a855/prometheus-operator-admission-webhook/0.log"
Dec 02 15:25:55 crc kubenswrapper[4814]: I1202 15:25:55.759488 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-pf955_b4635ec6-50f3-4e74-9cc9-e3abbe7461fe/operator/0.log"
Dec 02 15:25:55 crc kubenswrapper[4814]: I1202 15:25:55.793821 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7d5fb4cbfb-kk8rg_ffd9eb8f-6373-4ad5-8b9a-f05fdc0a566f/observability-ui-dashboards/0.log"
Dec 02 15:25:55 crc kubenswrapper[4814]: I1202 15:25:55.912101 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-zjnzp_564529e2-c4c3-4a3a-8b02-7dc44894e427/perses-operator/0.log"
Dec 02 15:26:02 crc kubenswrapper[4814]: I1202 15:26:02.146388 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9"
Dec 02 15:26:02 crc kubenswrapper[4814]: E1202 15:26:02.146839 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c"
Dec 02 15:26:16 crc kubenswrapper[4814]: I1202 15:26:16.156407 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9"
Dec 02 15:26:16 crc kubenswrapper[4814]: E1202 15:26:16.157681 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c"
Dec 02 15:26:28 crc kubenswrapper[4814]: I1202 15:26:28.150776 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9"
Dec 02 15:26:28 crc kubenswrapper[4814]: E1202 15:26:28.151539 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c"
Dec 02 15:26:39 crc kubenswrapper[4814]: I1202 15:26:39.144469 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9"
Dec 02 15:26:39 crc kubenswrapper[4814]: E1202 15:26:39.145269 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c"
Dec 02 15:26:50 crc kubenswrapper[4814]: I1202 15:26:50.044455 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-d647-account-create-update-n5j8w"]
Dec 02 15:26:50 crc kubenswrapper[4814]: I1202 15:26:50.051797 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-xvxsj"]
Dec 02 15:26:50 crc kubenswrapper[4814]: I1202 15:26:50.058056 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-d647-account-create-update-n5j8w"]
Dec 02 15:26:50 crc kubenswrapper[4814]: I1202 15:26:50.065021 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-xvxsj"]
Dec 02 15:26:50 crc kubenswrapper[4814]: I1202 15:26:50.154789 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="001710ba-b668-41f4-a817-fcd45782dd05" path="/var/lib/kubelet/pods/001710ba-b668-41f4-a817-fcd45782dd05/volumes"
Dec 02 15:26:50 crc kubenswrapper[4814]: I1202 15:26:50.155532 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7fa1510-c400-479b-9471-f8c1514984c0" path="/var/lib/kubelet/pods/d7fa1510-c400-479b-9471-f8c1514984c0/volumes"
Dec 02 15:26:50 crc kubenswrapper[4814]: I1202 15:26:50.651390 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 15:26:50 crc kubenswrapper[4814]: I1202 15:26:50.651450 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 15:26:53 crc kubenswrapper[4814]: I1202 15:26:53.145004 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9"
Dec 02 15:26:53 crc kubenswrapper[4814]: E1202 15:26:53.145507 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c"
Dec 02 15:26:57 crc kubenswrapper[4814]: I1202 15:26:57.020305 4814 generic.go:334] "Generic (PLEG): container finished" podID="e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0" containerID="4761e84f91fe77be0a5e1d405cabbfcab55ff2497f6e47f69804996aaee12362" exitCode=0
Dec 02 15:26:57 crc kubenswrapper[4814]: I1202 15:26:57.020373 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jkh4f/must-gather-wk9zz" event={"ID":"e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0","Type":"ContainerDied","Data":"4761e84f91fe77be0a5e1d405cabbfcab55ff2497f6e47f69804996aaee12362"}
Dec 02 15:26:57 crc kubenswrapper[4814]: I1202 15:26:57.021737 4814 scope.go:117] "RemoveContainer" containerID="4761e84f91fe77be0a5e1d405cabbfcab55ff2497f6e47f69804996aaee12362"
Dec 02 15:26:57 crc kubenswrapper[4814]: I1202 15:26:57.057642 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp"]
Dec 02 15:26:57 crc kubenswrapper[4814]: I1202 15:26:57.063783 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bdfjp"]
Dec 02 15:26:57 crc kubenswrapper[4814]: I1202 15:26:57.635285 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jkh4f_must-gather-wk9zz_e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0/gather/0.log"
Dec 02 15:26:58 crc kubenswrapper[4814]: I1202 15:26:58.162393 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532" path="/var/lib/kubelet/pods/d36a7f25-7b4d-4e8b-840d-ccfd8f0e5532/volumes"
Dec 02 15:27:01 crc kubenswrapper[4814]: I1202 15:27:01.709095 4814 scope.go:117] "RemoveContainer" containerID="434f413a71f702f0105d3c78f35b13c833775f312e0ff59da4dade67f1e2c7a0"
Dec 02 15:27:01 crc kubenswrapper[4814]: I1202 15:27:01.752701 4814 scope.go:117] "RemoveContainer" containerID="bab05be1e0dde0211684a40cd93614a2f42804549ba6ff224d6ffb7fd1e2e43c"
Dec 02 15:27:01 crc kubenswrapper[4814]: I1202 15:27:01.772787 4814 scope.go:117] "RemoveContainer" containerID="1ec825873efcd41122d412cc0a5abd1226d53aa8a782a7e5509641d377cddc02"
Dec 02 15:27:04 crc kubenswrapper[4814]: I1202 15:27:04.145601 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9"
Dec 02 15:27:04 crc kubenswrapper[4814]: E1202 15:27:04.146575 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c"
Dec 02 15:27:05 crc kubenswrapper[4814]: I1202 15:27:05.773721 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jkh4f/must-gather-wk9zz"]
Dec 02 15:27:05 crc kubenswrapper[4814]: I1202 15:27:05.774391 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-jkh4f/must-gather-wk9zz" podUID="e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0" containerName="copy" containerID="cri-o://775dd04927be1adbadef93b76a25846e6a043fefbe79685f0c071b33654e5391" gracePeriod=2
Dec 02 15:27:05 crc kubenswrapper[4814]: I1202 15:27:05.781758 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jkh4f/must-gather-wk9zz"]
Dec 02 15:27:06 crc kubenswrapper[4814]: I1202 15:27:06.099804 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jkh4f_must-gather-wk9zz_e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0/copy/0.log"
Dec 02 15:27:06 crc kubenswrapper[4814]: I1202 15:27:06.101472 4814 generic.go:334] "Generic (PLEG): container finished" podID="e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0" containerID="775dd04927be1adbadef93b76a25846e6a043fefbe79685f0c071b33654e5391" exitCode=143
Dec 02 15:27:06 crc kubenswrapper[4814]: I1202 15:27:06.174388 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jkh4f_must-gather-wk9zz_e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0/copy/0.log"
Dec 02 15:27:06 crc kubenswrapper[4814]: I1202 15:27:06.174826 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jkh4f/must-gather-wk9zz"
Dec 02 15:27:06 crc kubenswrapper[4814]: I1202 15:27:06.364520 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t2x4\" (UniqueName: \"kubernetes.io/projected/e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0-kube-api-access-5t2x4\") pod \"e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0\" (UID: \"e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0\") "
Dec 02 15:27:06 crc kubenswrapper[4814]: I1202 15:27:06.364612 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0-must-gather-output\") pod \"e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0\" (UID: \"e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0\") "
Dec 02 15:27:06 crc kubenswrapper[4814]: I1202 15:27:06.370787 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0-kube-api-access-5t2x4" (OuterVolumeSpecName: "kube-api-access-5t2x4") pod "e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0" (UID: "e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0"). InnerVolumeSpecName "kube-api-access-5t2x4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:27:06 crc kubenswrapper[4814]: I1202 15:27:06.463436 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0" (UID: "e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:27:06 crc kubenswrapper[4814]: I1202 15:27:06.466881 4814 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 02 15:27:06 crc kubenswrapper[4814]: I1202 15:27:06.466918 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t2x4\" (UniqueName: \"kubernetes.io/projected/e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0-kube-api-access-5t2x4\") on node \"crc\" DevicePath \"\""
Dec 02 15:27:07 crc kubenswrapper[4814]: I1202 15:27:07.110584 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jkh4f_must-gather-wk9zz_e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0/copy/0.log"
Dec 02 15:27:07 crc kubenswrapper[4814]: I1202 15:27:07.111030 4814 scope.go:117] "RemoveContainer" containerID="775dd04927be1adbadef93b76a25846e6a043fefbe79685f0c071b33654e5391"
Dec 02 15:27:07 crc kubenswrapper[4814]: I1202 15:27:07.111214 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jkh4f/must-gather-wk9zz"
Dec 02 15:27:07 crc kubenswrapper[4814]: I1202 15:27:07.135913 4814 scope.go:117] "RemoveContainer" containerID="4761e84f91fe77be0a5e1d405cabbfcab55ff2497f6e47f69804996aaee12362"
Dec 02 15:27:08 crc kubenswrapper[4814]: I1202 15:27:08.154181 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0" path="/var/lib/kubelet/pods/e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0/volumes"
Dec 02 15:27:15 crc kubenswrapper[4814]: I1202 15:27:15.146176 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9"
Dec 02 15:27:15 crc kubenswrapper[4814]: E1202 15:27:15.147027 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c"
Dec 02 15:27:20 crc kubenswrapper[4814]: I1202 15:27:20.651310 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 15:27:20 crc kubenswrapper[4814]: I1202 15:27:20.651874 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 15:27:27 crc kubenswrapper[4814]: I1202 15:27:27.144518 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9"
Dec 02 15:27:27 crc kubenswrapper[4814]: E1202 15:27:27.145067 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c"
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.349825 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tsmgr"]
Dec 02 15:27:38 crc kubenswrapper[4814]: E1202 15:27:38.350559 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87821a7e-f907-4107-acf7-51eee52d30d7" containerName="registry-server"
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.350571 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="87821a7e-f907-4107-acf7-51eee52d30d7" containerName="registry-server"
Dec 02 15:27:38 crc kubenswrapper[4814]: E1202 15:27:38.350581 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87821a7e-f907-4107-acf7-51eee52d30d7" containerName="extract-utilities"
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.350588 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="87821a7e-f907-4107-acf7-51eee52d30d7" containerName="extract-utilities"
Dec 02 15:27:38 crc kubenswrapper[4814]: E1202 15:27:38.350597 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0" containerName="copy"
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.350603 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0" containerName="copy"
Dec 02 15:27:38 crc kubenswrapper[4814]: E1202 15:27:38.350613 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0" containerName="gather"
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.350619 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0" containerName="gather"
Dec 02 15:27:38 crc kubenswrapper[4814]: E1202 15:27:38.350627 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87821a7e-f907-4107-acf7-51eee52d30d7" containerName="extract-content"
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.350633 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="87821a7e-f907-4107-acf7-51eee52d30d7" containerName="extract-content"
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.350799 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0" containerName="gather"
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.350814 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="e048f1f5-1d6e-47fd-8a76-ba3b7743e9b0" containerName="copy"
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.350825 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="87821a7e-f907-4107-acf7-51eee52d30d7" containerName="registry-server"
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.351909 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tsmgr"
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.369819 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tsmgr"]
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.524210 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a1cf9ea-c6dc-43f5-b631-9143e8290464-catalog-content\") pod \"community-operators-tsmgr\" (UID: \"6a1cf9ea-c6dc-43f5-b631-9143e8290464\") " pod="openshift-marketplace/community-operators-tsmgr"
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.524331 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a1cf9ea-c6dc-43f5-b631-9143e8290464-utilities\") pod \"community-operators-tsmgr\" (UID: \"6a1cf9ea-c6dc-43f5-b631-9143e8290464\") " pod="openshift-marketplace/community-operators-tsmgr"
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.524418 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvp9g\" (UniqueName: \"kubernetes.io/projected/6a1cf9ea-c6dc-43f5-b631-9143e8290464-kube-api-access-bvp9g\") pod \"community-operators-tsmgr\" (UID: \"6a1cf9ea-c6dc-43f5-b631-9143e8290464\") " pod="openshift-marketplace/community-operators-tsmgr"
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.625888 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvp9g\" (UniqueName: \"kubernetes.io/projected/6a1cf9ea-c6dc-43f5-b631-9143e8290464-kube-api-access-bvp9g\") pod \"community-operators-tsmgr\" (UID: \"6a1cf9ea-c6dc-43f5-b631-9143e8290464\") " pod="openshift-marketplace/community-operators-tsmgr"
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.625935 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a1cf9ea-c6dc-43f5-b631-9143e8290464-catalog-content\") pod \"community-operators-tsmgr\" (UID: \"6a1cf9ea-c6dc-43f5-b631-9143e8290464\") " pod="openshift-marketplace/community-operators-tsmgr"
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.625999 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a1cf9ea-c6dc-43f5-b631-9143e8290464-utilities\") pod \"community-operators-tsmgr\" (UID: \"6a1cf9ea-c6dc-43f5-b631-9143e8290464\") " pod="openshift-marketplace/community-operators-tsmgr"
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.626427 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a1cf9ea-c6dc-43f5-b631-9143e8290464-utilities\") pod \"community-operators-tsmgr\" (UID: \"6a1cf9ea-c6dc-43f5-b631-9143e8290464\") " pod="openshift-marketplace/community-operators-tsmgr"
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.626544 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a1cf9ea-c6dc-43f5-b631-9143e8290464-catalog-content\") pod \"community-operators-tsmgr\" (UID: \"6a1cf9ea-c6dc-43f5-b631-9143e8290464\") " pod="openshift-marketplace/community-operators-tsmgr"
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.661279 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvp9g\" (UniqueName: \"kubernetes.io/projected/6a1cf9ea-c6dc-43f5-b631-9143e8290464-kube-api-access-bvp9g\") pod \"community-operators-tsmgr\" (UID: \"6a1cf9ea-c6dc-43f5-b631-9143e8290464\") " pod="openshift-marketplace/community-operators-tsmgr"
Dec 02 15:27:38 crc kubenswrapper[4814]: I1202 15:27:38.688787 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tsmgr"
Dec 02 15:27:39 crc kubenswrapper[4814]: I1202 15:27:39.276912 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tsmgr"]
Dec 02 15:27:39 crc kubenswrapper[4814]: I1202 15:27:39.391727 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsmgr" event={"ID":"6a1cf9ea-c6dc-43f5-b631-9143e8290464","Type":"ContainerStarted","Data":"4dff92ead6cd3f56014c93d51ef65ef9d1bdb331e153ffb90de208e757c451ef"}
Dec 02 15:27:40 crc kubenswrapper[4814]: I1202 15:27:40.148668 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9"
Dec 02 15:27:40 crc kubenswrapper[4814]: E1202 15:27:40.149428 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c"
Dec 02 15:27:40 crc kubenswrapper[4814]: I1202 15:27:40.402301 4814 generic.go:334] "Generic (PLEG): container finished" podID="6a1cf9ea-c6dc-43f5-b631-9143e8290464" containerID="de438295d7ad9dd304662ae66c3af6cf22deecec8a1fb28f288c6acddd5bac45" exitCode=0
Dec 02 15:27:40 crc kubenswrapper[4814]: I1202 15:27:40.402338 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsmgr" event={"ID":"6a1cf9ea-c6dc-43f5-b631-9143e8290464","Type":"ContainerDied","Data":"de438295d7ad9dd304662ae66c3af6cf22deecec8a1fb28f288c6acddd5bac45"}
Dec 02 15:27:40 crc kubenswrapper[4814]: I1202 15:27:40.405086 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 02 15:27:42 crc kubenswrapper[4814]: I1202 15:27:42.422540 4814 generic.go:334] "Generic (PLEG): container finished" podID="6a1cf9ea-c6dc-43f5-b631-9143e8290464" containerID="b9d0b3154974181e90a888e17623128333df33bb365d5fedf69bbcc246e33858" exitCode=0
Dec 02 15:27:42 crc kubenswrapper[4814]: I1202 15:27:42.423154 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsmgr" event={"ID":"6a1cf9ea-c6dc-43f5-b631-9143e8290464","Type":"ContainerDied","Data":"b9d0b3154974181e90a888e17623128333df33bb365d5fedf69bbcc246e33858"}
Dec 02 15:27:43 crc kubenswrapper[4814]: I1202 15:27:43.442570 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsmgr" event={"ID":"6a1cf9ea-c6dc-43f5-b631-9143e8290464","Type":"ContainerStarted","Data":"4fce3f35d9591ae29e9b0a0a4fa38cefc0d44e05b495d1d2e819bc8754f59d7c"}
Dec 02 15:27:43 crc kubenswrapper[4814]: I1202 15:27:43.477056 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tsmgr" podStartSLOduration=2.9059635 podStartE2EDuration="5.477028557s" podCreationTimestamp="2025-12-02 15:27:38 +0000 UTC" firstStartedPulling="2025-12-02 15:27:40.404851174 +0000 UTC m=+3522.529213378" lastFinishedPulling="2025-12-02 15:27:42.975916191 +0000 UTC m=+3525.100278435" observedRunningTime="2025-12-02 15:27:43.46704963 +0000 UTC m=+3525.591411854" watchObservedRunningTime="2025-12-02 15:27:43.477028557 +0000 UTC m=+3525.601390801"
Dec 02 15:27:48 crc kubenswrapper[4814]: I1202 15:27:48.689410 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tsmgr"
Dec 02 15:27:48 crc kubenswrapper[4814]: I1202 15:27:48.689982 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tsmgr"
Dec 02 15:27:48 crc kubenswrapper[4814]: I1202 15:27:48.742899 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tsmgr"
Dec 02 15:27:49 crc kubenswrapper[4814]: I1202 15:27:49.563501 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tsmgr"
Dec 02 15:27:50 crc kubenswrapper[4814]: I1202 15:27:50.650949 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 15:27:50 crc kubenswrapper[4814]: I1202 15:27:50.651000 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 15:27:50 crc kubenswrapper[4814]: I1202 15:27:50.651038 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8"
Dec 02 15:27:50 crc kubenswrapper[4814]: I1202 15:27:50.651460 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33489b4ca9d6b6c45a94929ba3bec4490e82e2be7881895a3ad5083cdb0471ac"} pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 15:27:50 crc kubenswrapper[4814]: I1202 15:27:50.651712 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon"
containerID="cri-o://33489b4ca9d6b6c45a94929ba3bec4490e82e2be7881895a3ad5083cdb0471ac" gracePeriod=600 Dec 02 15:27:51 crc kubenswrapper[4814]: I1202 15:27:51.534083 4814 generic.go:334] "Generic (PLEG): container finished" podID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerID="33489b4ca9d6b6c45a94929ba3bec4490e82e2be7881895a3ad5083cdb0471ac" exitCode=0 Dec 02 15:27:51 crc kubenswrapper[4814]: I1202 15:27:51.534176 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerDied","Data":"33489b4ca9d6b6c45a94929ba3bec4490e82e2be7881895a3ad5083cdb0471ac"} Dec 02 15:27:51 crc kubenswrapper[4814]: I1202 15:27:51.534992 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerStarted","Data":"31ed9e71ed2bee81ac49097c0841e19b3fc4a7c32bb93cff13d37e111bfdc10f"} Dec 02 15:27:51 crc kubenswrapper[4814]: I1202 15:27:51.535093 4814 scope.go:117] "RemoveContainer" containerID="0f0253952fe7d912a8588d19cc501e4b02758c9c197255d9456f415b21ef9b29" Dec 02 15:27:52 crc kubenswrapper[4814]: I1202 15:27:52.145049 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9" Dec 02 15:27:52 crc kubenswrapper[4814]: I1202 15:27:52.340139 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tsmgr"] Dec 02 15:27:52 crc kubenswrapper[4814]: I1202 15:27:52.340430 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tsmgr" podUID="6a1cf9ea-c6dc-43f5-b631-9143e8290464" containerName="registry-server" containerID="cri-o://4fce3f35d9591ae29e9b0a0a4fa38cefc0d44e05b495d1d2e819bc8754f59d7c" gracePeriod=2 Dec 02 15:27:52 crc kubenswrapper[4814]: I1202 15:27:52.542472 
4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9b519b57-d380-444e-863b-b42e4d4eb06c","Type":"ContainerStarted","Data":"dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6"} Dec 02 15:27:52 crc kubenswrapper[4814]: I1202 15:27:52.551731 4814 generic.go:334] "Generic (PLEG): container finished" podID="6a1cf9ea-c6dc-43f5-b631-9143e8290464" containerID="4fce3f35d9591ae29e9b0a0a4fa38cefc0d44e05b495d1d2e819bc8754f59d7c" exitCode=0 Dec 02 15:27:52 crc kubenswrapper[4814]: I1202 15:27:52.551779 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsmgr" event={"ID":"6a1cf9ea-c6dc-43f5-b631-9143e8290464","Type":"ContainerDied","Data":"4fce3f35d9591ae29e9b0a0a4fa38cefc0d44e05b495d1d2e819bc8754f59d7c"} Dec 02 15:27:52 crc kubenswrapper[4814]: I1202 15:27:52.764507 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tsmgr" Dec 02 15:27:52 crc kubenswrapper[4814]: I1202 15:27:52.787173 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a1cf9ea-c6dc-43f5-b631-9143e8290464-catalog-content\") pod \"6a1cf9ea-c6dc-43f5-b631-9143e8290464\" (UID: \"6a1cf9ea-c6dc-43f5-b631-9143e8290464\") " Dec 02 15:27:52 crc kubenswrapper[4814]: I1202 15:27:52.787242 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a1cf9ea-c6dc-43f5-b631-9143e8290464-utilities\") pod \"6a1cf9ea-c6dc-43f5-b631-9143e8290464\" (UID: \"6a1cf9ea-c6dc-43f5-b631-9143e8290464\") " Dec 02 15:27:52 crc kubenswrapper[4814]: I1202 15:27:52.787272 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvp9g\" (UniqueName: 
\"kubernetes.io/projected/6a1cf9ea-c6dc-43f5-b631-9143e8290464-kube-api-access-bvp9g\") pod \"6a1cf9ea-c6dc-43f5-b631-9143e8290464\" (UID: \"6a1cf9ea-c6dc-43f5-b631-9143e8290464\") " Dec 02 15:27:52 crc kubenswrapper[4814]: I1202 15:27:52.789133 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a1cf9ea-c6dc-43f5-b631-9143e8290464-utilities" (OuterVolumeSpecName: "utilities") pod "6a1cf9ea-c6dc-43f5-b631-9143e8290464" (UID: "6a1cf9ea-c6dc-43f5-b631-9143e8290464"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:27:52 crc kubenswrapper[4814]: I1202 15:27:52.826426 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a1cf9ea-c6dc-43f5-b631-9143e8290464-kube-api-access-bvp9g" (OuterVolumeSpecName: "kube-api-access-bvp9g") pod "6a1cf9ea-c6dc-43f5-b631-9143e8290464" (UID: "6a1cf9ea-c6dc-43f5-b631-9143e8290464"). InnerVolumeSpecName "kube-api-access-bvp9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:27:52 crc kubenswrapper[4814]: I1202 15:27:52.839202 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a1cf9ea-c6dc-43f5-b631-9143e8290464-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a1cf9ea-c6dc-43f5-b631-9143e8290464" (UID: "6a1cf9ea-c6dc-43f5-b631-9143e8290464"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:27:52 crc kubenswrapper[4814]: I1202 15:27:52.897340 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a1cf9ea-c6dc-43f5-b631-9143e8290464-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:27:52 crc kubenswrapper[4814]: I1202 15:27:52.897383 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a1cf9ea-c6dc-43f5-b631-9143e8290464-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:27:52 crc kubenswrapper[4814]: I1202 15:27:52.897396 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvp9g\" (UniqueName: \"kubernetes.io/projected/6a1cf9ea-c6dc-43f5-b631-9143e8290464-kube-api-access-bvp9g\") on node \"crc\" DevicePath \"\"" Dec 02 15:27:53 crc kubenswrapper[4814]: I1202 15:27:53.561924 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsmgr" event={"ID":"6a1cf9ea-c6dc-43f5-b631-9143e8290464","Type":"ContainerDied","Data":"4dff92ead6cd3f56014c93d51ef65ef9d1bdb331e153ffb90de208e757c451ef"} Dec 02 15:27:53 crc kubenswrapper[4814]: I1202 15:27:53.561980 4814 scope.go:117] "RemoveContainer" containerID="4fce3f35d9591ae29e9b0a0a4fa38cefc0d44e05b495d1d2e819bc8754f59d7c" Dec 02 15:27:53 crc kubenswrapper[4814]: I1202 15:27:53.562084 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tsmgr" Dec 02 15:27:53 crc kubenswrapper[4814]: I1202 15:27:53.599153 4814 scope.go:117] "RemoveContainer" containerID="b9d0b3154974181e90a888e17623128333df33bb365d5fedf69bbcc246e33858" Dec 02 15:27:53 crc kubenswrapper[4814]: I1202 15:27:53.601513 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tsmgr"] Dec 02 15:27:53 crc kubenswrapper[4814]: I1202 15:27:53.611517 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tsmgr"] Dec 02 15:27:53 crc kubenswrapper[4814]: I1202 15:27:53.633023 4814 scope.go:117] "RemoveContainer" containerID="de438295d7ad9dd304662ae66c3af6cf22deecec8a1fb28f288c6acddd5bac45" Dec 02 15:27:54 crc kubenswrapper[4814]: I1202 15:27:54.154030 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a1cf9ea-c6dc-43f5-b631-9143e8290464" path="/var/lib/kubelet/pods/6a1cf9ea-c6dc-43f5-b631-9143e8290464/volumes" Dec 02 15:27:55 crc kubenswrapper[4814]: I1202 15:27:55.582895 4814 generic.go:334] "Generic (PLEG): container finished" podID="9b519b57-d380-444e-863b-b42e4d4eb06c" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" exitCode=1 Dec 02 15:27:55 crc kubenswrapper[4814]: I1202 15:27:55.582944 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9b519b57-d380-444e-863b-b42e4d4eb06c","Type":"ContainerDied","Data":"dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6"} Dec 02 15:27:55 crc kubenswrapper[4814]: I1202 15:27:55.582986 4814 scope.go:117] "RemoveContainer" containerID="37f6495325bf3b80df0c00d290ec2a8d346f3e440cdcaef39ed0a0c117e096b9" Dec 02 15:27:55 crc kubenswrapper[4814]: I1202 15:27:55.583641 4814 scope.go:117] "RemoveContainer" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" Dec 02 15:27:55 crc 
kubenswrapper[4814]: E1202 15:27:55.593778 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:27:58 crc kubenswrapper[4814]: I1202 15:27:58.557421 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:27:58 crc kubenswrapper[4814]: I1202 15:27:58.557779 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:27:58 crc kubenswrapper[4814]: I1202 15:27:58.557799 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:27:58 crc kubenswrapper[4814]: I1202 15:27:58.557812 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 15:27:58 crc kubenswrapper[4814]: I1202 15:27:58.558514 4814 scope.go:117] "RemoveContainer" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" Dec 02 15:27:58 crc kubenswrapper[4814]: E1202 15:27:58.558777 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:28:12 crc kubenswrapper[4814]: I1202 15:28:12.145598 4814 scope.go:117] 
"RemoveContainer" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" Dec 02 15:28:12 crc kubenswrapper[4814]: E1202 15:28:12.146312 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:28:25 crc kubenswrapper[4814]: I1202 15:28:25.145030 4814 scope.go:117] "RemoveContainer" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" Dec 02 15:28:25 crc kubenswrapper[4814]: E1202 15:28:25.146016 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:28:25 crc kubenswrapper[4814]: I1202 15:28:25.353290 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l6p66"] Dec 02 15:28:25 crc kubenswrapper[4814]: E1202 15:28:25.353614 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1cf9ea-c6dc-43f5-b631-9143e8290464" containerName="extract-content" Dec 02 15:28:25 crc kubenswrapper[4814]: I1202 15:28:25.353630 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1cf9ea-c6dc-43f5-b631-9143e8290464" containerName="extract-content" Dec 02 15:28:25 crc kubenswrapper[4814]: E1202 15:28:25.353645 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1cf9ea-c6dc-43f5-b631-9143e8290464" 
containerName="extract-utilities" Dec 02 15:28:25 crc kubenswrapper[4814]: I1202 15:28:25.353654 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1cf9ea-c6dc-43f5-b631-9143e8290464" containerName="extract-utilities" Dec 02 15:28:25 crc kubenswrapper[4814]: E1202 15:28:25.353672 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1cf9ea-c6dc-43f5-b631-9143e8290464" containerName="registry-server" Dec 02 15:28:25 crc kubenswrapper[4814]: I1202 15:28:25.353679 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1cf9ea-c6dc-43f5-b631-9143e8290464" containerName="registry-server" Dec 02 15:28:25 crc kubenswrapper[4814]: I1202 15:28:25.353845 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1cf9ea-c6dc-43f5-b631-9143e8290464" containerName="registry-server" Dec 02 15:28:25 crc kubenswrapper[4814]: I1202 15:28:25.354942 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l6p66" Dec 02 15:28:25 crc kubenswrapper[4814]: I1202 15:28:25.379704 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l6p66"] Dec 02 15:28:25 crc kubenswrapper[4814]: I1202 15:28:25.401358 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d2e2464-4903-4605-ad93-973c5bbe3bf3-utilities\") pod \"redhat-marketplace-l6p66\" (UID: \"6d2e2464-4903-4605-ad93-973c5bbe3bf3\") " pod="openshift-marketplace/redhat-marketplace-l6p66" Dec 02 15:28:25 crc kubenswrapper[4814]: I1202 15:28:25.401749 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d2e2464-4903-4605-ad93-973c5bbe3bf3-catalog-content\") pod \"redhat-marketplace-l6p66\" (UID: \"6d2e2464-4903-4605-ad93-973c5bbe3bf3\") " 
pod="openshift-marketplace/redhat-marketplace-l6p66" Dec 02 15:28:25 crc kubenswrapper[4814]: I1202 15:28:25.401895 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfwch\" (UniqueName: \"kubernetes.io/projected/6d2e2464-4903-4605-ad93-973c5bbe3bf3-kube-api-access-wfwch\") pod \"redhat-marketplace-l6p66\" (UID: \"6d2e2464-4903-4605-ad93-973c5bbe3bf3\") " pod="openshift-marketplace/redhat-marketplace-l6p66" Dec 02 15:28:25 crc kubenswrapper[4814]: I1202 15:28:25.503557 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d2e2464-4903-4605-ad93-973c5bbe3bf3-utilities\") pod \"redhat-marketplace-l6p66\" (UID: \"6d2e2464-4903-4605-ad93-973c5bbe3bf3\") " pod="openshift-marketplace/redhat-marketplace-l6p66" Dec 02 15:28:25 crc kubenswrapper[4814]: I1202 15:28:25.503613 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d2e2464-4903-4605-ad93-973c5bbe3bf3-catalog-content\") pod \"redhat-marketplace-l6p66\" (UID: \"6d2e2464-4903-4605-ad93-973c5bbe3bf3\") " pod="openshift-marketplace/redhat-marketplace-l6p66" Dec 02 15:28:25 crc kubenswrapper[4814]: I1202 15:28:25.503674 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfwch\" (UniqueName: \"kubernetes.io/projected/6d2e2464-4903-4605-ad93-973c5bbe3bf3-kube-api-access-wfwch\") pod \"redhat-marketplace-l6p66\" (UID: \"6d2e2464-4903-4605-ad93-973c5bbe3bf3\") " pod="openshift-marketplace/redhat-marketplace-l6p66" Dec 02 15:28:25 crc kubenswrapper[4814]: I1202 15:28:25.504486 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d2e2464-4903-4605-ad93-973c5bbe3bf3-utilities\") pod \"redhat-marketplace-l6p66\" (UID: \"6d2e2464-4903-4605-ad93-973c5bbe3bf3\") " 
pod="openshift-marketplace/redhat-marketplace-l6p66" Dec 02 15:28:25 crc kubenswrapper[4814]: I1202 15:28:25.504761 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d2e2464-4903-4605-ad93-973c5bbe3bf3-catalog-content\") pod \"redhat-marketplace-l6p66\" (UID: \"6d2e2464-4903-4605-ad93-973c5bbe3bf3\") " pod="openshift-marketplace/redhat-marketplace-l6p66" Dec 02 15:28:25 crc kubenswrapper[4814]: I1202 15:28:25.525063 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfwch\" (UniqueName: \"kubernetes.io/projected/6d2e2464-4903-4605-ad93-973c5bbe3bf3-kube-api-access-wfwch\") pod \"redhat-marketplace-l6p66\" (UID: \"6d2e2464-4903-4605-ad93-973c5bbe3bf3\") " pod="openshift-marketplace/redhat-marketplace-l6p66" Dec 02 15:28:25 crc kubenswrapper[4814]: I1202 15:28:25.676928 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l6p66" Dec 02 15:28:26 crc kubenswrapper[4814]: I1202 15:28:26.155905 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l6p66"] Dec 02 15:28:26 crc kubenswrapper[4814]: I1202 15:28:26.861734 4814 generic.go:334] "Generic (PLEG): container finished" podID="6d2e2464-4903-4605-ad93-973c5bbe3bf3" containerID="35f80ba3feb657784ffc4301bdaa404925243e9bf24af3340861dcaefb7e2727" exitCode=0 Dec 02 15:28:26 crc kubenswrapper[4814]: I1202 15:28:26.862033 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6p66" event={"ID":"6d2e2464-4903-4605-ad93-973c5bbe3bf3","Type":"ContainerDied","Data":"35f80ba3feb657784ffc4301bdaa404925243e9bf24af3340861dcaefb7e2727"} Dec 02 15:28:26 crc kubenswrapper[4814]: I1202 15:28:26.862059 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6p66" 
event={"ID":"6d2e2464-4903-4605-ad93-973c5bbe3bf3","Type":"ContainerStarted","Data":"67c045c87e00e4035740ee105985febc92ce645fce77405465c7734a4db3cd44"} Dec 02 15:28:28 crc kubenswrapper[4814]: I1202 15:28:28.881817 4814 generic.go:334] "Generic (PLEG): container finished" podID="6d2e2464-4903-4605-ad93-973c5bbe3bf3" containerID="e5a90c4562de9ea36d3b9e0dc0602e1c0122fabaaeafded4d25234cbe72e7864" exitCode=0 Dec 02 15:28:28 crc kubenswrapper[4814]: I1202 15:28:28.881918 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6p66" event={"ID":"6d2e2464-4903-4605-ad93-973c5bbe3bf3","Type":"ContainerDied","Data":"e5a90c4562de9ea36d3b9e0dc0602e1c0122fabaaeafded4d25234cbe72e7864"} Dec 02 15:28:29 crc kubenswrapper[4814]: I1202 15:28:29.895200 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6p66" event={"ID":"6d2e2464-4903-4605-ad93-973c5bbe3bf3","Type":"ContainerStarted","Data":"5a032601c181b65abe3ecfcad166b08d9913b72732b160d52c9eeeef5b289405"} Dec 02 15:28:29 crc kubenswrapper[4814]: I1202 15:28:29.932425 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l6p66" podStartSLOduration=2.324137917 podStartE2EDuration="4.932399694s" podCreationTimestamp="2025-12-02 15:28:25 +0000 UTC" firstStartedPulling="2025-12-02 15:28:26.864481517 +0000 UTC m=+3568.988843761" lastFinishedPulling="2025-12-02 15:28:29.472743324 +0000 UTC m=+3571.597105538" observedRunningTime="2025-12-02 15:28:29.927060922 +0000 UTC m=+3572.051423146" watchObservedRunningTime="2025-12-02 15:28:29.932399694 +0000 UTC m=+3572.056761928" Dec 02 15:28:35 crc kubenswrapper[4814]: I1202 15:28:35.677483 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l6p66" Dec 02 15:28:35 crc kubenswrapper[4814]: I1202 15:28:35.678169 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l6p66" Dec 02 15:28:35 crc kubenswrapper[4814]: I1202 15:28:35.744870 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l6p66" Dec 02 15:28:35 crc kubenswrapper[4814]: I1202 15:28:35.984174 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l6p66" Dec 02 15:28:36 crc kubenswrapper[4814]: I1202 15:28:36.144853 4814 scope.go:117] "RemoveContainer" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" Dec 02 15:28:36 crc kubenswrapper[4814]: E1202 15:28:36.145507 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:28:39 crc kubenswrapper[4814]: I1202 15:28:39.338253 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l6p66"] Dec 02 15:28:39 crc kubenswrapper[4814]: I1202 15:28:39.338788 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l6p66" podUID="6d2e2464-4903-4605-ad93-973c5bbe3bf3" containerName="registry-server" containerID="cri-o://5a032601c181b65abe3ecfcad166b08d9913b72732b160d52c9eeeef5b289405" gracePeriod=2 Dec 02 15:28:40 crc kubenswrapper[4814]: I1202 15:28:40.015791 4814 generic.go:334] "Generic (PLEG): container finished" podID="6d2e2464-4903-4605-ad93-973c5bbe3bf3" containerID="5a032601c181b65abe3ecfcad166b08d9913b72732b160d52c9eeeef5b289405" exitCode=0 Dec 02 15:28:40 crc kubenswrapper[4814]: I1202 15:28:40.015836 4814 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6p66" event={"ID":"6d2e2464-4903-4605-ad93-973c5bbe3bf3","Type":"ContainerDied","Data":"5a032601c181b65abe3ecfcad166b08d9913b72732b160d52c9eeeef5b289405"} Dec 02 15:28:40 crc kubenswrapper[4814]: I1202 15:28:40.330362 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l6p66" Dec 02 15:28:40 crc kubenswrapper[4814]: I1202 15:28:40.428760 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfwch\" (UniqueName: \"kubernetes.io/projected/6d2e2464-4903-4605-ad93-973c5bbe3bf3-kube-api-access-wfwch\") pod \"6d2e2464-4903-4605-ad93-973c5bbe3bf3\" (UID: \"6d2e2464-4903-4605-ad93-973c5bbe3bf3\") " Dec 02 15:28:40 crc kubenswrapper[4814]: I1202 15:28:40.428885 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d2e2464-4903-4605-ad93-973c5bbe3bf3-utilities\") pod \"6d2e2464-4903-4605-ad93-973c5bbe3bf3\" (UID: \"6d2e2464-4903-4605-ad93-973c5bbe3bf3\") " Dec 02 15:28:40 crc kubenswrapper[4814]: I1202 15:28:40.428975 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d2e2464-4903-4605-ad93-973c5bbe3bf3-catalog-content\") pod \"6d2e2464-4903-4605-ad93-973c5bbe3bf3\" (UID: \"6d2e2464-4903-4605-ad93-973c5bbe3bf3\") " Dec 02 15:28:40 crc kubenswrapper[4814]: I1202 15:28:40.430159 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d2e2464-4903-4605-ad93-973c5bbe3bf3-utilities" (OuterVolumeSpecName: "utilities") pod "6d2e2464-4903-4605-ad93-973c5bbe3bf3" (UID: "6d2e2464-4903-4605-ad93-973c5bbe3bf3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:28:40 crc kubenswrapper[4814]: I1202 15:28:40.436480 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d2e2464-4903-4605-ad93-973c5bbe3bf3-kube-api-access-wfwch" (OuterVolumeSpecName: "kube-api-access-wfwch") pod "6d2e2464-4903-4605-ad93-973c5bbe3bf3" (UID: "6d2e2464-4903-4605-ad93-973c5bbe3bf3"). InnerVolumeSpecName "kube-api-access-wfwch". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:28:40 crc kubenswrapper[4814]: I1202 15:28:40.448961 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d2e2464-4903-4605-ad93-973c5bbe3bf3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d2e2464-4903-4605-ad93-973c5bbe3bf3" (UID: "6d2e2464-4903-4605-ad93-973c5bbe3bf3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:28:40 crc kubenswrapper[4814]: I1202 15:28:40.530979 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d2e2464-4903-4605-ad93-973c5bbe3bf3-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:28:40 crc kubenswrapper[4814]: I1202 15:28:40.531011 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d2e2464-4903-4605-ad93-973c5bbe3bf3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:28:40 crc kubenswrapper[4814]: I1202 15:28:40.531023 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfwch\" (UniqueName: \"kubernetes.io/projected/6d2e2464-4903-4605-ad93-973c5bbe3bf3-kube-api-access-wfwch\") on node \"crc\" DevicePath \"\"" Dec 02 15:28:41 crc kubenswrapper[4814]: I1202 15:28:41.029350 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6p66" 
event={"ID":"6d2e2464-4903-4605-ad93-973c5bbe3bf3","Type":"ContainerDied","Data":"67c045c87e00e4035740ee105985febc92ce645fce77405465c7734a4db3cd44"} Dec 02 15:28:41 crc kubenswrapper[4814]: I1202 15:28:41.029443 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l6p66" Dec 02 15:28:41 crc kubenswrapper[4814]: I1202 15:28:41.029678 4814 scope.go:117] "RemoveContainer" containerID="5a032601c181b65abe3ecfcad166b08d9913b72732b160d52c9eeeef5b289405" Dec 02 15:28:41 crc kubenswrapper[4814]: I1202 15:28:41.053545 4814 scope.go:117] "RemoveContainer" containerID="e5a90c4562de9ea36d3b9e0dc0602e1c0122fabaaeafded4d25234cbe72e7864" Dec 02 15:28:41 crc kubenswrapper[4814]: I1202 15:28:41.072988 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l6p66"] Dec 02 15:28:41 crc kubenswrapper[4814]: I1202 15:28:41.074782 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l6p66"] Dec 02 15:28:41 crc kubenswrapper[4814]: I1202 15:28:41.096942 4814 scope.go:117] "RemoveContainer" containerID="35f80ba3feb657784ffc4301bdaa404925243e9bf24af3340861dcaefb7e2727" Dec 02 15:28:42 crc kubenswrapper[4814]: I1202 15:28:42.160480 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d2e2464-4903-4605-ad93-973c5bbe3bf3" path="/var/lib/kubelet/pods/6d2e2464-4903-4605-ad93-973c5bbe3bf3/volumes" Dec 02 15:28:50 crc kubenswrapper[4814]: I1202 15:28:50.144954 4814 scope.go:117] "RemoveContainer" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" Dec 02 15:28:50 crc kubenswrapper[4814]: E1202 15:28:50.145563 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine 
pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:29:05 crc kubenswrapper[4814]: I1202 15:29:05.144316 4814 scope.go:117] "RemoveContainer" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" Dec 02 15:29:05 crc kubenswrapper[4814]: E1202 15:29:05.144990 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:29:16 crc kubenswrapper[4814]: I1202 15:29:16.144999 4814 scope.go:117] "RemoveContainer" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" Dec 02 15:29:16 crc kubenswrapper[4814]: E1202 15:29:16.145804 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:29:29 crc kubenswrapper[4814]: I1202 15:29:29.144976 4814 scope.go:117] "RemoveContainer" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" Dec 02 15:29:29 crc kubenswrapper[4814]: E1202 15:29:29.146111 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine 
pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:29:41 crc kubenswrapper[4814]: I1202 15:29:41.144954 4814 scope.go:117] "RemoveContainer" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" Dec 02 15:29:41 crc kubenswrapper[4814]: E1202 15:29:41.145940 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:29:50 crc kubenswrapper[4814]: I1202 15:29:50.651206 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:29:50 crc kubenswrapper[4814]: I1202 15:29:50.653164 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:29:53 crc kubenswrapper[4814]: I1202 15:29:53.145149 4814 scope.go:117] "RemoveContainer" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" Dec 02 15:29:53 crc kubenswrapper[4814]: E1202 15:29:53.145677 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:30:00 crc kubenswrapper[4814]: I1202 15:30:00.173885 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411490-mqc6q"] Dec 02 15:30:00 crc kubenswrapper[4814]: E1202 15:30:00.174882 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d2e2464-4903-4605-ad93-973c5bbe3bf3" containerName="registry-server" Dec 02 15:30:00 crc kubenswrapper[4814]: I1202 15:30:00.174898 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d2e2464-4903-4605-ad93-973c5bbe3bf3" containerName="registry-server" Dec 02 15:30:00 crc kubenswrapper[4814]: E1202 15:30:00.174914 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d2e2464-4903-4605-ad93-973c5bbe3bf3" containerName="extract-utilities" Dec 02 15:30:00 crc kubenswrapper[4814]: I1202 15:30:00.174922 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d2e2464-4903-4605-ad93-973c5bbe3bf3" containerName="extract-utilities" Dec 02 15:30:00 crc kubenswrapper[4814]: E1202 15:30:00.174938 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d2e2464-4903-4605-ad93-973c5bbe3bf3" containerName="extract-content" Dec 02 15:30:00 crc kubenswrapper[4814]: I1202 15:30:00.174946 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d2e2464-4903-4605-ad93-973c5bbe3bf3" containerName="extract-content" Dec 02 15:30:00 crc kubenswrapper[4814]: I1202 15:30:00.175136 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d2e2464-4903-4605-ad93-973c5bbe3bf3" containerName="registry-server" Dec 02 15:30:00 crc kubenswrapper[4814]: I1202 15:30:00.175857 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-mqc6q" Dec 02 15:30:00 crc kubenswrapper[4814]: I1202 15:30:00.177937 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 15:30:00 crc kubenswrapper[4814]: I1202 15:30:00.178148 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 15:30:00 crc kubenswrapper[4814]: I1202 15:30:00.180990 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411490-mqc6q"] Dec 02 15:30:00 crc kubenswrapper[4814]: I1202 15:30:00.229066 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/885851b7-b2a4-4ac4-904e-d9566c0e3576-config-volume\") pod \"collect-profiles-29411490-mqc6q\" (UID: \"885851b7-b2a4-4ac4-904e-d9566c0e3576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-mqc6q" Dec 02 15:30:00 crc kubenswrapper[4814]: I1202 15:30:00.229112 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/885851b7-b2a4-4ac4-904e-d9566c0e3576-secret-volume\") pod \"collect-profiles-29411490-mqc6q\" (UID: \"885851b7-b2a4-4ac4-904e-d9566c0e3576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-mqc6q" Dec 02 15:30:00 crc kubenswrapper[4814]: I1202 15:30:00.229161 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kch67\" (UniqueName: \"kubernetes.io/projected/885851b7-b2a4-4ac4-904e-d9566c0e3576-kube-api-access-kch67\") pod \"collect-profiles-29411490-mqc6q\" (UID: \"885851b7-b2a4-4ac4-904e-d9566c0e3576\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-mqc6q" Dec 02 15:30:00 crc kubenswrapper[4814]: I1202 15:30:00.330557 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/885851b7-b2a4-4ac4-904e-d9566c0e3576-config-volume\") pod \"collect-profiles-29411490-mqc6q\" (UID: \"885851b7-b2a4-4ac4-904e-d9566c0e3576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-mqc6q" Dec 02 15:30:00 crc kubenswrapper[4814]: I1202 15:30:00.330616 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/885851b7-b2a4-4ac4-904e-d9566c0e3576-secret-volume\") pod \"collect-profiles-29411490-mqc6q\" (UID: \"885851b7-b2a4-4ac4-904e-d9566c0e3576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-mqc6q" Dec 02 15:30:00 crc kubenswrapper[4814]: I1202 15:30:00.330651 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kch67\" (UniqueName: \"kubernetes.io/projected/885851b7-b2a4-4ac4-904e-d9566c0e3576-kube-api-access-kch67\") pod \"collect-profiles-29411490-mqc6q\" (UID: \"885851b7-b2a4-4ac4-904e-d9566c0e3576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-mqc6q" Dec 02 15:30:00 crc kubenswrapper[4814]: I1202 15:30:00.331634 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/885851b7-b2a4-4ac4-904e-d9566c0e3576-config-volume\") pod \"collect-profiles-29411490-mqc6q\" (UID: \"885851b7-b2a4-4ac4-904e-d9566c0e3576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-mqc6q" Dec 02 15:30:00 crc kubenswrapper[4814]: I1202 15:30:00.341632 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/885851b7-b2a4-4ac4-904e-d9566c0e3576-secret-volume\") pod \"collect-profiles-29411490-mqc6q\" (UID: \"885851b7-b2a4-4ac4-904e-d9566c0e3576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-mqc6q" Dec 02 15:30:00 crc kubenswrapper[4814]: I1202 15:30:00.348506 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kch67\" (UniqueName: \"kubernetes.io/projected/885851b7-b2a4-4ac4-904e-d9566c0e3576-kube-api-access-kch67\") pod \"collect-profiles-29411490-mqc6q\" (UID: \"885851b7-b2a4-4ac4-904e-d9566c0e3576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-mqc6q" Dec 02 15:30:00 crc kubenswrapper[4814]: I1202 15:30:00.534690 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-mqc6q" Dec 02 15:30:01 crc kubenswrapper[4814]: I1202 15:30:01.021678 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411490-mqc6q"] Dec 02 15:30:01 crc kubenswrapper[4814]: W1202 15:30:01.023962 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod885851b7_b2a4_4ac4_904e_d9566c0e3576.slice/crio-64c7b1a4c591a0899b744491ec31f95d1443181f1585c3fefc3a122b16667f9e WatchSource:0}: Error finding container 64c7b1a4c591a0899b744491ec31f95d1443181f1585c3fefc3a122b16667f9e: Status 404 returned error can't find the container with id 64c7b1a4c591a0899b744491ec31f95d1443181f1585c3fefc3a122b16667f9e Dec 02 15:30:01 crc kubenswrapper[4814]: I1202 15:30:01.751760 4814 generic.go:334] "Generic (PLEG): container finished" podID="885851b7-b2a4-4ac4-904e-d9566c0e3576" containerID="79bf80559255a8b123e45d505c8fb91662ef2eb42fab361ddfb75740d413ff09" exitCode=0 Dec 02 15:30:01 crc kubenswrapper[4814]: I1202 15:30:01.751809 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-mqc6q" event={"ID":"885851b7-b2a4-4ac4-904e-d9566c0e3576","Type":"ContainerDied","Data":"79bf80559255a8b123e45d505c8fb91662ef2eb42fab361ddfb75740d413ff09"} Dec 02 15:30:01 crc kubenswrapper[4814]: I1202 15:30:01.751852 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-mqc6q" event={"ID":"885851b7-b2a4-4ac4-904e-d9566c0e3576","Type":"ContainerStarted","Data":"64c7b1a4c591a0899b744491ec31f95d1443181f1585c3fefc3a122b16667f9e"} Dec 02 15:30:03 crc kubenswrapper[4814]: I1202 15:30:03.199832 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-mqc6q" Dec 02 15:30:03 crc kubenswrapper[4814]: I1202 15:30:03.277892 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/885851b7-b2a4-4ac4-904e-d9566c0e3576-secret-volume\") pod \"885851b7-b2a4-4ac4-904e-d9566c0e3576\" (UID: \"885851b7-b2a4-4ac4-904e-d9566c0e3576\") " Dec 02 15:30:03 crc kubenswrapper[4814]: I1202 15:30:03.277943 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/885851b7-b2a4-4ac4-904e-d9566c0e3576-config-volume\") pod \"885851b7-b2a4-4ac4-904e-d9566c0e3576\" (UID: \"885851b7-b2a4-4ac4-904e-d9566c0e3576\") " Dec 02 15:30:03 crc kubenswrapper[4814]: I1202 15:30:03.277992 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kch67\" (UniqueName: \"kubernetes.io/projected/885851b7-b2a4-4ac4-904e-d9566c0e3576-kube-api-access-kch67\") pod \"885851b7-b2a4-4ac4-904e-d9566c0e3576\" (UID: \"885851b7-b2a4-4ac4-904e-d9566c0e3576\") " Dec 02 15:30:03 crc kubenswrapper[4814]: I1202 15:30:03.278852 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/885851b7-b2a4-4ac4-904e-d9566c0e3576-config-volume" (OuterVolumeSpecName: "config-volume") pod "885851b7-b2a4-4ac4-904e-d9566c0e3576" (UID: "885851b7-b2a4-4ac4-904e-d9566c0e3576"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:30:03 crc kubenswrapper[4814]: I1202 15:30:03.279281 4814 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/885851b7-b2a4-4ac4-904e-d9566c0e3576-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 15:30:03 crc kubenswrapper[4814]: I1202 15:30:03.284713 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/885851b7-b2a4-4ac4-904e-d9566c0e3576-kube-api-access-kch67" (OuterVolumeSpecName: "kube-api-access-kch67") pod "885851b7-b2a4-4ac4-904e-d9566c0e3576" (UID: "885851b7-b2a4-4ac4-904e-d9566c0e3576"). InnerVolumeSpecName "kube-api-access-kch67". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:30:03 crc kubenswrapper[4814]: I1202 15:30:03.290888 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/885851b7-b2a4-4ac4-904e-d9566c0e3576-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "885851b7-b2a4-4ac4-904e-d9566c0e3576" (UID: "885851b7-b2a4-4ac4-904e-d9566c0e3576"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:30:03 crc kubenswrapper[4814]: I1202 15:30:03.380516 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kch67\" (UniqueName: \"kubernetes.io/projected/885851b7-b2a4-4ac4-904e-d9566c0e3576-kube-api-access-kch67\") on node \"crc\" DevicePath \"\"" Dec 02 15:30:03 crc kubenswrapper[4814]: I1202 15:30:03.380557 4814 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/885851b7-b2a4-4ac4-904e-d9566c0e3576-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 15:30:03 crc kubenswrapper[4814]: I1202 15:30:03.773245 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-mqc6q" event={"ID":"885851b7-b2a4-4ac4-904e-d9566c0e3576","Type":"ContainerDied","Data":"64c7b1a4c591a0899b744491ec31f95d1443181f1585c3fefc3a122b16667f9e"} Dec 02 15:30:03 crc kubenswrapper[4814]: I1202 15:30:03.773282 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64c7b1a4c591a0899b744491ec31f95d1443181f1585c3fefc3a122b16667f9e" Dec 02 15:30:03 crc kubenswrapper[4814]: I1202 15:30:03.773313 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-mqc6q" Dec 02 15:30:04 crc kubenswrapper[4814]: I1202 15:30:04.293673 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv"] Dec 02 15:30:04 crc kubenswrapper[4814]: I1202 15:30:04.304496 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411445-dq9jv"] Dec 02 15:30:06 crc kubenswrapper[4814]: I1202 15:30:06.145020 4814 scope.go:117] "RemoveContainer" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" Dec 02 15:30:06 crc kubenswrapper[4814]: E1202 15:30:06.145479 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:30:06 crc kubenswrapper[4814]: I1202 15:30:06.160694 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6" path="/var/lib/kubelet/pods/0cb479f0-7d1d-426f-adad-7b9fa0c0b8d6/volumes" Dec 02 15:30:18 crc kubenswrapper[4814]: I1202 15:30:18.153532 4814 scope.go:117] "RemoveContainer" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" Dec 02 15:30:18 crc kubenswrapper[4814]: E1202 15:30:18.154367 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:30:20 crc kubenswrapper[4814]: I1202 15:30:20.651631 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:30:20 crc kubenswrapper[4814]: I1202 15:30:20.652268 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:30:33 crc kubenswrapper[4814]: I1202 15:30:33.144246 4814 scope.go:117] "RemoveContainer" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" Dec 02 15:30:33 crc kubenswrapper[4814]: E1202 15:30:33.145109 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:30:47 crc kubenswrapper[4814]: I1202 15:30:47.144190 4814 scope.go:117] "RemoveContainer" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" Dec 02 15:30:47 crc kubenswrapper[4814]: E1202 15:30:47.145008 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine 
pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:30:50 crc kubenswrapper[4814]: I1202 15:30:50.666496 4814 patch_prober.go:28] interesting pod/machine-config-daemon-k4kc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:30:50 crc kubenswrapper[4814]: I1202 15:30:50.666836 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:30:50 crc kubenswrapper[4814]: I1202 15:30:50.666884 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" Dec 02 15:30:50 crc kubenswrapper[4814]: I1202 15:30:50.668449 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"31ed9e71ed2bee81ac49097c0841e19b3fc4a7c32bb93cff13d37e111bfdc10f"} pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 15:30:50 crc kubenswrapper[4814]: I1202 15:30:50.668664 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerName="machine-config-daemon" containerID="cri-o://31ed9e71ed2bee81ac49097c0841e19b3fc4a7c32bb93cff13d37e111bfdc10f" gracePeriod=600 Dec 02 
15:30:50 crc kubenswrapper[4814]: E1202 15:30:50.795049 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:30:51 crc kubenswrapper[4814]: I1202 15:30:51.313975 4814 generic.go:334] "Generic (PLEG): container finished" podID="d93bbd5a-650e-4f70-9b1c-868df83a667c" containerID="31ed9e71ed2bee81ac49097c0841e19b3fc4a7c32bb93cff13d37e111bfdc10f" exitCode=0 Dec 02 15:30:51 crc kubenswrapper[4814]: I1202 15:30:51.314112 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" event={"ID":"d93bbd5a-650e-4f70-9b1c-868df83a667c","Type":"ContainerDied","Data":"31ed9e71ed2bee81ac49097c0841e19b3fc4a7c32bb93cff13d37e111bfdc10f"} Dec 02 15:30:51 crc kubenswrapper[4814]: I1202 15:30:51.314491 4814 scope.go:117] "RemoveContainer" containerID="33489b4ca9d6b6c45a94929ba3bec4490e82e2be7881895a3ad5083cdb0471ac" Dec 02 15:30:51 crc kubenswrapper[4814]: I1202 15:30:51.315218 4814 scope.go:117] "RemoveContainer" containerID="31ed9e71ed2bee81ac49097c0841e19b3fc4a7c32bb93cff13d37e111bfdc10f" Dec 02 15:30:51 crc kubenswrapper[4814]: E1202 15:30:51.315577 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:31:01 crc kubenswrapper[4814]: I1202 15:31:01.965697 4814 
scope.go:117] "RemoveContainer" containerID="8a1fae10782fa141fd33364ef2d9b8972a6d381bcd8152966494fde1f91602f4" Dec 02 15:31:02 crc kubenswrapper[4814]: I1202 15:31:02.146435 4814 scope.go:117] "RemoveContainer" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" Dec 02 15:31:02 crc kubenswrapper[4814]: E1202 15:31:02.146902 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:31:05 crc kubenswrapper[4814]: I1202 15:31:05.145537 4814 scope.go:117] "RemoveContainer" containerID="31ed9e71ed2bee81ac49097c0841e19b3fc4a7c32bb93cff13d37e111bfdc10f" Dec 02 15:31:05 crc kubenswrapper[4814]: E1202 15:31:05.146090 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:31:16 crc kubenswrapper[4814]: I1202 15:31:16.144772 4814 scope.go:117] "RemoveContainer" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" Dec 02 15:31:16 crc kubenswrapper[4814]: E1202 15:31:16.145769 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:31:19 crc kubenswrapper[4814]: I1202 15:31:19.144629 4814 scope.go:117] "RemoveContainer" containerID="31ed9e71ed2bee81ac49097c0841e19b3fc4a7c32bb93cff13d37e111bfdc10f" Dec 02 15:31:19 crc kubenswrapper[4814]: E1202 15:31:19.145252 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:31:29 crc kubenswrapper[4814]: I1202 15:31:29.145354 4814 scope.go:117] "RemoveContainer" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" Dec 02 15:31:29 crc kubenswrapper[4814]: E1202 15:31:29.146110 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:31:32 crc kubenswrapper[4814]: I1202 15:31:32.150483 4814 scope.go:117] "RemoveContainer" containerID="31ed9e71ed2bee81ac49097c0841e19b3fc4a7c32bb93cff13d37e111bfdc10f" Dec 02 15:31:32 crc kubenswrapper[4814]: E1202 15:31:32.151018 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:31:42 crc kubenswrapper[4814]: I1202 15:31:42.145143 4814 scope.go:117] "RemoveContainer" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" Dec 02 15:31:42 crc kubenswrapper[4814]: E1202 15:31:42.146467 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:31:44 crc kubenswrapper[4814]: I1202 15:31:44.145149 4814 scope.go:117] "RemoveContainer" containerID="31ed9e71ed2bee81ac49097c0841e19b3fc4a7c32bb93cff13d37e111bfdc10f" Dec 02 15:31:44 crc kubenswrapper[4814]: E1202 15:31:44.145717 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c" Dec 02 15:31:55 crc kubenswrapper[4814]: I1202 15:31:55.145209 4814 scope.go:117] "RemoveContainer" containerID="dacbfebb88f96e5e05c1440b8d8e23ec9a57bb01016768cf09fa6f6569211ad6" Dec 02 15:31:55 crc kubenswrapper[4814]: E1202 15:31:55.146016 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-decision-engine pod=watcher-kuttl-decision-engine-0_watcher-kuttl-default(9b519b57-d380-444e-863b-b42e4d4eb06c)\"" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9b519b57-d380-444e-863b-b42e4d4eb06c" Dec 02 15:31:57 crc kubenswrapper[4814]: I1202 15:31:57.145065 4814 scope.go:117] "RemoveContainer" containerID="31ed9e71ed2bee81ac49097c0841e19b3fc4a7c32bb93cff13d37e111bfdc10f" Dec 02 15:31:57 crc kubenswrapper[4814]: E1202 15:31:57.145555 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k4kc8_openshift-machine-config-operator(d93bbd5a-650e-4f70-9b1c-868df83a667c)\"" pod="openshift-machine-config-operator/machine-config-daemon-k4kc8" podUID="d93bbd5a-650e-4f70-9b1c-868df83a667c"